Here you can find the most common open-next.config.ts examples, which you can use as a starting point for your own configuration.
Streaming with lambda
AWS has several different implementations of streaming in production. You might be lucky and get a fully working one, but you might also get one that is not suitable for production. Be aware that there is nothing you can do to prevent AWS from breaking your deployment (the same code and the same runtime might break from one day to the next): see Thread 1 and Thread 2.
On some AWS accounts the response can hang if the body is empty. There is a workaround for that in OpenNext 3.0.3+: set the environment variable OPEN_NEXT_FORCE_NON_EMPTY_RESPONSE to "true", which ensures that the stream body is never empty.
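For example, if you deploy with SST, you can set this variable on the site construct. This is a minimal sketch, assuming SST v2's NextjsSite construct; with another IaC tool, set the variable on the server Lambda function instead.
import { NextjsSite, StackContext } from 'sst/constructs';

export function Site({ stack }: StackContext) {
  const site = new NextjsSite(stack, 'site', {
    environment: {
      // Workaround for responses hanging on empty bodies (OpenNext 3.0.3+)
      OPEN_NEXT_FORCE_NON_EMPTY_RESPONSE: 'true',
    },
  });

  stack.addOutputs({ url: site.url });
}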
If you have an issue with streaming, send a message on Discord and contact AWS support to let them know about the issue.
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
default: {
override: {
wrapper: 'aws-lambda-streaming', // This is necessary to enable lambda streaming
},
},
} satisfies OpenNextConfig;
export default config;
Splitting the server
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
// This is the default server, similar to the server-function in open-next v2
// In this case we are not providing any override, so it will generate a normal lambda (i.e. no streaming)
default: {},
// You can define multiple functions here, each with its own routes, patterns and overrides
functions: {
myFn: {
// Patterns need to use glob patterns
patterns: ['route1', 'route2', 'route3'],
// For the app directory, you need to include the route|page; no need to include layout or loading
// Routes need to be prefixed with app/ or pages/ depending on the directory used
routes: ['app/route1/page', 'app/route2/page', 'pages/route3'],
override: {
wrapper: 'aws-lambda-streaming',
},
},
},
} satisfies OpenNextConfig;
export default config;
Use aws4fetch instead of the AWS SDK
By default we use S3, DynamoDB and SQS for handling ISR/SSG and the fetch cache, and we interact with them using the AWS SDK v3. This can contribute a lot to cold starts. There is a built-in option to use aws4fetch instead of the AWS SDK, which can reduce cold starts by up to 300ms. This requires OpenNext v3.0.3+. Here is how you enable it:
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
default: {
override: {
tagCache: 'dynamodb-lite',
incrementalCache: 's3-lite',
queue: 'sqs-lite',
},
},
} satisfies OpenNextConfig;
export default config;
Running in Lambda@Edge
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
default: {
placement: 'global',
override: {
converter: 'aws-cloudfront',
},
},
} satisfies OpenNextConfig;
export default config;
Bundle for a classic Node server (with function splitting)
This is not implemented in SST yet. You'll have to use your own IaC construct to deploy this.
Be aware that this uses the exact same system for ISR/SSG as the default Lambda setup, so it will need all the proper permissions and environment variables to interact with S3, DynamoDB and SQS (or whatever you override them with). See here for more details.
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
// In this case, the default server is meant to run as a classic Node server
// To execute the server you need to run `node index.mjs` inside `.open-next/server-functions/default`
default: {
override: {
wrapper: 'node',
converter: 'node',
// This is necessary to generate a simple Dockerfile and to let the generated output know that it needs to be deployed on Docker
// You can also provide a string here (i.e. the content of your Dockerfile) which will be used to create the Dockerfile
// You can omit this if you don't plan on using Docker, or if you plan on using your own custom Dockerfile
generateDockerfile: true,
},
},
// You can define multiple functions here, each with its own routes, patterns and overrides
functions: {
// In this case the API routes run in Lambda and the rest runs in Node
myFn: {
// Patterns need to use glob patterns
patterns: ['api/*'],
routes: ['app/api/test/route', 'app/api/test2/route'],
},
},
} satisfies OpenNextConfig;
export default config;
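As noted in the comments above, generateDockerfile also accepts a string containing the Dockerfile content to generate. Here is a minimal sketch; the Dockerfile content is purely illustrative and assumes the build context is the function's output directory, so adjust the base image and paths to your setup.
import type { OpenNextConfig } from 'open-next/types/open-next';

const config = {
  default: {
    override: {
      wrapper: 'node',
      converter: 'node',
      // Illustrative Dockerfile content; adapt it to your own deployment
      generateDockerfile: `
FROM node:18-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.mjs"]
`,
    },
  },
} satisfies OpenNextConfig;

export default config;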
Edge runtime split function
This will generate two server functions: the default one and the edge one. The edge one will still be deployed as a Lambda function, but it will use the edge runtime.
Edge runtime functions have lower cold start times, but you can only deploy one route per function. They also do not have the middleware bundled in the function, so you need to use external middleware if you need it in front of the edge function.
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
default: {},
functions: {
edge: {
runtime: 'edge',
// placement: 'global', // Uncomment this line if you want your function to be deployed globally (i.e. Lambda@Edge). Otherwise it will be deployed in the region specified in the stack
routes: ['app/api/test/route'],
patterns: ['api/test'],
},
},
} satisfies OpenNextConfig;
export default config;
External middleware
In some cases (edge runtime, function splitting with some middleware rewrites, etc.) you might want to use external middleware. With the default middleware configuration, it is bundled for deployment in Lambda@Edge. Here is how you can do it:
import type { OpenNextConfig } from 'open-next/types/open-next';
const config = {
default: {},
functions: {
myFn: {
patterns: ['route1', 'route2', 'route3'],
routes: ['app/route1/page', 'app/route2/page', 'pages/route3'],
override: {
wrapper: 'aws-lambda-streaming',
},
},
},
middleware: {
external: true,
},
} satisfies OpenNextConfig;
export default config;