Build an AWS Lambda Layer for NPM Packages via a Lambda Function
Sometimes, npm packages are too big for an AWS Lambda function. And sometimes, some tools (like git, I found) just aren’t available in a Lambda environment.
The solution to both problems is to install large npm packages, or otherwise unavailable tools, in a Lambda Layer.
The problem is that installing a layer compatible with the Amazon Linux runtime is a little tricky. Unless you’re just deploying generic code, you need to write a whole Lambda function to install the packages and create the layer.
Along the way, there are a few gnarly problems, including:
- You can’t just run npm install in a Lambda function — you have to point npm at the temp directory quite aggressively
- You can’t just upload a huge file as a layer directly from a function — you have to go via S3
Specifically, I wanted to install puppeteer and chromium in a function but ran into size problems. So I had to come up with the approach below.
The Process for Creating a Lambda Layer via a Lambda Function
This code automates the process of creating a Lambda Layer. Here’s what it does, step by step:
- Installs specified NPM packages within Amazon Linux
- Does this within Lambda to ensure the installed package is compatible with the Lambda environment. You could do this in a similar Linux environment, but that is risky, and anyway, I didn’t have one!
- Makes sure npm uses the /tmp folder, the only writable location in a Lambda environment
- Creates a zip file of the installed packages (you can use any zip package)
- Creates an S3 bucket and uploads the zip to it
- Creates a Lambda Layer using the S3 file in the bucket
- Cleans up by deleting the S3 bucket
To keep things simple, the code below makes a few assumptions and simplifications:
- Shared Namespace: It assumes the S3 bucket, Lambda Layer, and functions are all in the same AWS namespace. This means we don’t need to handle separate authentication for Lambda and S3 operations.
- Cleanup: The code creates and then destroys an S3 bucket for each Layer creation. In a production environment, you might want to reuse a bucket instead.
- Custom functions: The code uses some custom functions like createZipFile and uploadToS3. These aren’t standard Node.js functions — they’re helpers I created to handle specific tasks. You’ll have to implement these yourself.
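For what it’s worth, here is a minimal sketch of one such helper, generateBucketName. All it has to do is produce a valid, collision-resistant S3 bucket name; the exact naming scheme below is an arbitrary choice, not anything the layer process requires:

```javascript
import crypto from 'node:crypto';

// Hypothetical sketch of generateBucketName. S3 bucket names must be
// 3-63 characters long, using only lowercase letters, digits, and
// hyphens, and can't start or end with a hyphen.
export function generateBucketName(layerName) {
  const base = layerName
    .toLowerCase()
    .replace(/[^a-z0-9-]/g, '-') // replace anything S3 won't accept
    .replace(/^-+|-+$/g, '');    // no leading or trailing hyphens
  const suffix = crypto.randomBytes(4).toString('hex'); // avoid name collisions
  return `${base || 'layer'}-layer-${suffix}`.slice(0, 63);
}
```

The other helpers would be similar thin wrappers: createZipFile around a zip library such as archiver, and the S3 helpers around @aws-sdk/client-s3.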
Note: Your Lambda function has to have enough resources to run. Running npm install can take a while! I gave mine a timeout of five minutes and 1024MB of memory. (Zipping uses memory.)
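If you manage the builder function from the AWS CLI, those resource settings might look like this (the function name is a placeholder):

```shell
# Give the builder function a 5-minute timeout and 1 GB of memory
# ("layer-builder" is a placeholder name)
aws lambda update-function-configuration \
  --function-name layer-builder \
  --timeout 300 \
  --memory-size 1024
```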
The Code
Below is the code for the function. Bear in mind the caveats and assumptions above.
import { createZipFile } from '/opt/createZip.mjs';
import { generateBucketName, createS3, uploadToS3, destroyS3 } from '/opt/S3Utils.mjs';
import { Lambda } from '@aws-sdk/client-lambda';
import fs from 'node:fs/promises';
import path from 'node:path';
import { execSync } from 'node:child_process';

const lambda = new Lambda();

export const handler = async (event) => {
  console.log(`Received event: ${JSON.stringify(event)}`);
  try {
    const { packages, layerName } = event;
    if (!packages || !layerName) {
      throw new Error('Missing required parameters: packages or layerName');
    }
    console.log(`Creating Lambda Layer: ${layerName} with packages: ${packages}`);

    // Step 1: Install packages
    const tempDir = '/tmp/layer';
    await fs.mkdir(tempDir, { recursive: true });

    const packageJson = {
      name: layerName,
      version: '1.0.0',
      dependencies: {}
    };
    await fs.writeFile(path.join(tempDir, 'package.json'), JSON.stringify(packageJson, null, 2));

    console.log('Installing packages...');
    const customCacheDir = path.join(tempDir, '.npmcache');
    process.env.NPM_CONFIG_CACHE = customCacheDir;

    const packageList = packages.split(' ');
    for (const pkg of packageList) {
      console.log(`Installing ${pkg}...`);
      execSync(
        `npm install ${pkg} --save-exact --no-package-lock --no-audit --no-fund --omit=dev`,
        {
          cwd: tempDir,
          env: {
            ...process.env,
            HOME: tempDir,
            NPM_CONFIG_CACHE: customCacheDir,
            npm_config_update_notifier: 'false'
          },
          stdio: 'inherit'
        }
      );
    }

    console.log('Deleting npm cache...');
    await fs.rm(customCacheDir, { recursive: true, force: true });
    console.log('NPM cache deleted.');

    // Step 2: Create zip file
    const zipFilePath = await createZipFile(tempDir);
    console.log(`Created zip file: ${zipFilePath}`);

    // Step 3: Create an S3 bucket with a random name
    const bucketName = generateBucketName(layerName);
    const createBucketResult = await createS3(bucketName);
    if (createBucketResult.statusCode !== 200) {
      throw new Error(`Failed to create S3 bucket: ${createBucketResult.body}`);
    }
    console.log(`Created S3 bucket: ${bucketName}`);

    // Step 4: Upload zip to S3
    const uploadResult = await uploadToS3(bucketName, zipFilePath, '');
    if (uploadResult.statusCode !== 200) {
      throw new Error(`Failed to upload to S3: ${uploadResult.body}`);
    }
    const uploadBody = JSON.parse(uploadResult.body);
    console.log(`Uploaded zip file to S3: ${uploadBody.s3Url}`);

    // Step 5: Create Lambda Layer
    const createLayerResponse = await lambda.publishLayerVersion({
      LayerName: layerName,
      Description: `Layer containing node_modules for: ${packages}`,
      Content: {
        S3Bucket: bucketName,
        S3Key: path.basename(zipFilePath),
      },
      CompatibleRuntimes: ['nodejs20.x'],
      CompatibleArchitectures: ['x86_64'],
    });
    console.log(`Created Lambda Layer: ${JSON.stringify(createLayerResponse)}`);

    // Step 6: Destroy S3 bucket
    const destroyBucketResult = await destroyS3(bucketName);
    if (destroyBucketResult.statusCode !== 200) {
      console.warn(`Failed to destroy S3 bucket: ${destroyBucketResult.body}`);
    } else {
      console.log(`Destroyed S3 bucket: ${bucketName}`);
    }

    return {
      statusCode: 200,
      body: JSON.stringify({
        message: 'Lambda Layer created successfully',
        layerArn: createLayerResponse.LayerArn,
        layerVersion: createLayerResponse.Version,
      }),
    };
  } catch (error) {
    console.error(`Error in Lambda handler: ${error.message}`);
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Internal Server Error', details: error.message }),
    };
  }
};
Using the Layer
Once you’ve created the layer, you can use it in your code like any package you installed via npm install. You just have to attach the layer to your function.
You can do this programmatically via the aws-sdk, or via the web interface.
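With the AWS CLI, attaching the layer looks something like this. The function name, region, account ID, and layer version below are placeholders; also note that --layers replaces the function’s entire layer list, so include any existing layer ARNs too:

```shell
# Attach the published layer to a target function (ARN values are placeholders)
# --layers sets the full list of layers, not just one more
aws lambda update-function-configuration \
  --function-name my-target-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:puppeteer:1
```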
My Test Case
As I mentioned above, I specifically wanted to install puppeteer and chromium. Here’s what I did.
First, you have to use the latest possible version of puppeteer-core, and then the compatible version of @sparticuz/chromium, a version of chromium suitable for headless environments like Lambda (see here).
| Puppeteer | Chrome | Firefox |
|---|---|---|
| Puppeteer v23.2.0 | Chrome for Testing 128.0.6613.84 | Firefox 129.0.2 |
| Puppeteer v23.1.1 | Chrome for Testing 127.0.6533.119 | Firefox 129.0.2 |
E.g. the latest version of puppeteer at present is v23.2.0, and per their support page, the compatible version of chromium is 128.
But the latest version of the @sparticuz/chromium package is actually 127 right now, so I have to use a slightly older version of puppeteer, v23.1.1.
This means the packages to install are “puppeteer@23.1.1 @sparticuz/chromium@127”.
So I created test event JSON for my function above:
{
  "packages": "puppeteer@23.1.1 @sparticuz/chromium@127",
  "layerName": "puppeteer"
}
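Saved as test-event.json, that event can be fired at the builder function from the AWS CLI (the function name is a placeholder):

```shell
# Invoke the builder function with the test event and capture the response
aws lambda invoke \
  --function-name layer-builder \
  --cli-binary-format raw-in-base64-out \
  --payload file://test-event.json \
  response.json
```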
In running this test case, I actually found a bug in my code that didn’t let me specify exact versions, as they’re not HTML-compatible strings — but I updated the code to handle them!
Wrapping up
The above function solved a number of problems for me. I hope it does for you, too!
If you have any issues, drop a comment below.