Dev Blog #1: Image Uploads and Resizing

Posted on 7/17/2020

I hope the greasy steak pictures helped convey the sensation of fat gushing out of a morsel of wagyu beef as you bite down. The image uploading process went through two slightly different iterations, and it was pretty quick--maybe three or four hours from start to finish. It was also inspired by a few problems I'd solved while working on two projects at work.

The company I work for sells a document management platform called Real File. It's actually pretty neat; in addition to making your files available in the cloud, it can run a small program on your computer to sync virtual copies of your files across all of your machines, and it has some cool web integrations. The project first started about fifteen years ago, though, before cloud-based solutions really took off. I never actually saw this version in production, but from what I've gathered, customers used to upload their files, and we would store them in a regular file system on some servers in the back. When Amazon S3 became widely available, it was quickly patched in with a little hack: an additional application was added that would crawl the file system and push any files it found up to the cloud. It worked to an extent, but we ran into problems with race conditions, lock files, storage space, and memory. Eventually I implemented a pretty obvious solution: a logged-in user could request temporary credentials to S3, upload their file directly, and then notify our servers when the upload was complete.
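That last flow can be sketched as a tiny helper. Everything here is illustrative: `signUrl`, the key layout, and the `/api/UploadComplete` endpoint are made-up names standing in for Real File's actual endpoints and credential machinery.

```javascript
// Illustrative sketch of the direct-to-S3 flow: the server hands the client
// a short-lived upload URL, the client PUTs the file straight to S3, then
// notifies the server that the upload finished.
function planDirectUpload(userId, fileName, signUrl) {
  // Namespace uploads per user so the temporary credentials can be scoped
  // to just that user's prefix.
  const key = "uploads/" + userId + "/" + fileName;
  return {
    key: key,
    uploadUrl: signUrl(key),                 // e.g. a pre-signed S3 PUT URL
    completeEndpoint: "/api/UploadComplete"  // client calls this when done
  };
}
```

The point of the design is that the file bytes never pass through the application servers at all, which is what eliminated the crawler's race conditions and storage pressure.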

The solution I ended up with for image uploads on the blog isn't quite the same, but it uses all of the lessons I learned from working on Real File. I found a neat package called multer-s3 that handles streaming multipart files to S3. Streaming is invaluable to me, because the blog is currently running on AWS Lambda with 128 MB of RAM; any large file that had to be held in memory before uploading to S3 would almost certainly crash the process. The files I've uploaded have all been around 3-4 MB, but I'm not sure whether larger files would run into problems if their chunks were routed to different Lambda containers.

Here's what that solution looks like:

```javascript
const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: config.s3bucket,
    acl: "public-read",
    contentType: multerS3.AUTO_CONTENT_TYPE,
    metadata: function(req, file, cb) {
      cb(null, { fieldName: file.fieldname });
    },
    key: function(req, file, cb) {
      cb(null, "images/" + uuid() + ".jpg");
    }
  })
});

app.post('/api/UploadImage', passwordProtect, upload.single('file'), function(req, res, next) {
  res.json({
    status: "ok",
    data: config.websiteName + "/s/" + req.file.key
  });
});
```

It's about as simple as a file upload can get, and so far it's been working perfectly.

The next issue is the size of the images. They're coming from an iPhone, so they're not huge, but they're definitely not small enough to download a bunch of them on page load. I could crunch them before uploading, but that's a pain, and I'd have to produce several sizes for every image. If I were a client asking for an image-upload feature on my blog, I definitely wouldn't want to have to do that.

I did a lot of work on another big public-facing app called Adwallet. Once or twice a week, the app will send you a notification, and you can watch a 30-second ad and make 50¢. Since people are often watching these ads on mediocre mobile connections, we needed the videos to be small enough to load quickly. As part of our nightly batch processes, I added a function that took new video uploads and crunched them into a few different sizes with Amazon Elastic Transcoder. The resized videos had "_s", "_m", or "_l" appended to the name, depending on the size.
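The naming convention is simple enough to sketch. `sizedName` here is a hypothetical helper, not the actual Adwallet code (which did this inside the Elastic Transcoder batch job); it just isolates the convention.

```javascript
// Insert "_s", "_m", or "_l" before the file extension, so "ad.mp4"
// becomes "ad_m.mp4" for the medium rendition.
function sizedName(fileName, sizeTag) {
  const dot = fileName.lastIndexOf(".");
  if (dot === -1) return fileName + "_" + sizeTag;  // no extension: append
  return fileName.slice(0, dot) + "_" + sizeTag + fileName.slice(dot);
}
```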

That's basically what I initially did in this project to shrink my image files, except I used the wonderful jimp to compress and resize the images entirely inside the Node.js process. After the initial version, though, I realized a key difference between Adwallet's use case and my own: I'm compressing 3 MB image files, not 100 MB video files. So instead of pre-compressing the images, I changed it to work on the fly. Any image can be requested with "s", "m", or "l" in the path, or with a number representing the side of the square you want the image to fit into; jimp compresses it on the fly, caches it, and sends it to you. The 128 MB container is definitely suffering a bit, but I'm caching images for a long time, so that will lessen the load. Here's the code:

```javascript
app.get("/images/:size/:image", async function(req, res) {
  var image = req.params.image;
  var size = req.params.size;
  var sizes = {
    "xs": 320,
    "s": 640,
    "m": 1280,
    "l": 2560
  };
  if (!isNaN(parseInt(size))) {
    size = parseInt(size);
  } else {
    size = sizes[size];
  }
  if (size) {
    res.contentType('image/jpeg');
    var url = config.s3bucket + "/images/" + image;
    var img = await jimp.read(url);
    await img.scaleToFit(size, size).quality(70).writeAsync("/tmp/" + image);
    res.sendFile("/tmp/" + image);
  } else {
    res.status(400);
    res.json({
      status: "error",
      message: "Invalid size"
    });
  }
});
```

I'm still writing the file to the filesystem before sending it to the client; I haven't figured out whether that's strictly necessary or not.
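One possibility: jimp can return the encoded image as an in-memory Buffer via `getBufferAsync`, and Express's `res.send` accepts a Buffer directly, which would skip the /tmp write entirely. A sketch, where `sendResized` is a hypothetical helper and `img` is a jimp image as in the handler above:

```javascript
// Encode to a Buffer and send it straight to the client, no temp file.
async function sendResized(img, size, res) {
  const buf = await img
    .scaleToFit(size, size)
    .quality(70)
    .getBufferAsync("image/jpeg");
  res.contentType("image/jpeg");
  res.send(buf);
}
```

The trade-off is that the encoded image then lives entirely in the 128 MB heap instead of on disk, so it's not obviously a win on a tiny Lambda.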

Lots of updates to the blog are planned for the near future! Stay tuned for menus, categories, and tags (how exciting), tomatoes, and a Strava API integration!