ipx cache generated images locally #795
This will be added as https://unstorage.unjs.io/drivers/fs . UnStorage is part of Nitro, which is the engine of Nuxt 3. |
I believe @pi0 is already working on this, with exciting things to come 😁 |
Hi @danielroe, will this feature also be released in the v0 branch for Nuxt 2? |
It's unlikely unless there's a PR submitted to do it. |
Is a cache for nuxt generate available in the Nuxt 3 version of nuxt image? I only have ~100 images on my website so far, but nuxt generate already takes ~7 minutes. |
The generate part isn't even the biggest issue. When running in SSR/ISR we're seeing huge amounts of RAM usage on our server because IPX is constantly regenerating images. Before ISR and static generation we had around 50MB RAM usage; now we have 500MB-1GB. Not what we envisioned when we moved from a monolithic approach using a PHP CMS to going headless with Nuxt. We don't have a CDN, and we won't get one just for image manipulation and storage, so we'd very much welcome a local image cache for IPX so it won't regenerate every time. IPX is great, but unusable in this form for production SSR :-/ To put those huge amounts of RAM in perspective: it's a one-pager with about 30 images in it total. |
This is what happens under heavy load, then node restarts... |
Can anyone tell if a possible solution could be to cache the images via nginx? |
Not as long as the generated images don't have a unique hash. |
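A content-addressed key is what would make a front proxy cache viable: the URL (or filename) changes whenever the source bytes or modifiers change, so stale entries can never be served. A minimal sketch of such a key (the helper name and key layout are my own, not part of ipx):

```javascript
// Sketch: derive a cache key from the source image bytes plus the
// modifier string (e.g. "w_200,f_webp"). Names here are illustrative.
import { createHash } from "crypto";

function cacheKey(sourceBuffer, modifiers) {
  const digest = createHash("sha256")
    .update(sourceBuffer)
    .update(modifiers)
    .digest("hex")
    .slice(0, 16); // a short prefix is enough for a cache key

  // Sanitize modifiers so the key stays filesystem- and URL-safe
  return `${digest}-${modifiers.replace(/[^a-z0-9_,-]/gi, "_")}`;
}

console.log(cacheKey(Buffer.from("image-bytes"), "w_200,f_webp"));
```

Any change to either the source image or the modifiers produces a new key, so a proxy can cache these URLs indefinitely.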
Is there any workaround for IPX caching on SSR? My server is too humble to handle this kind of regeneration on every request. |
I would really appreciate local caching:
|
I was researching the same problem. The release notes for v1.0.0-rc.1 mention "Support multi-sources and server-side caching using Nuxt 3's built-in Storage Layer for the default image optimizer (unjs/ipx)". But is this something different, or just not configured/implemented correctly? It used to be on the roadmap for ipx v2 (unjs/ipx#171), but we are on v3 by now, and I think it is still not supported. |
I found a workaround: Nuxt has a hybrid mode with which you can prerender and cache routes: https://nuxt.com/docs/guide/concepts/rendering#hybrid-rendering So what I have done is prerender my index, which links all the images. This leads to Nuxt also prerendering all the images, so they are not generated on each request. |
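This workaround can be expressed in nuxt.config with route rules; the route paths below are illustrative examples, not required names:

```javascript
// nuxt.config.js — sketch of the hybrid-rendering workaround;
// the listed routes are placeholders for your own pages
export default defineNuxtConfig({
  routeRules: {
    // Prerendered at build time; the images it links get generated
    // once during the build instead of on each request
    "/": { prerender: true },
    // Other routes can stay SSR/SWR as needed
    "/blog/**": { swr: 3600 },
  },
});
```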
This is the real issue and unfortunately a long-standing bug. It makes SWR impossible to use, as you only get broken images after the first cache miss. Prerendering usually isn't a viable option when doing SSR, as most people don't do SSR for the fun of it but because it's necessary in their (and my) use case. Prerendering is also problematic if you have lazy-loaded components, as those images won't be generated at all because the prerenderer can't reach them. A working SWR, however, would mitigate this issue massively. |
Actually, I found the solution to using SWR and IPX by setting This doesn't cache ipx images, but at least it lets you use SWR for the rest of the site. This really should get documented somewhere; it took me almost a year to find the solution to using ipx+swr, and by the looks of it I'm not even remotely alone with this. |
Another solution would be to use Varnish Cache. This is a pretty drastic measure but it works well (for me at least) |
Same here, we added Varnish cache in front of it all. |
Unfortunately Varnish isn't a solution; it's a mere workaround that isn't available to everyone. |
I have been struggling with this for a while and created a solution that works for me. Since I found this page often when I was searching for a solution, I will share it here. What I did was create a separate IPX entity. I run this with the programmatic API and Express. I added a middleware to check if the file already exists in the cache. If not, I capture the output and store it for a potential next request. In my project root I created a new ipx.js:
import { listen } from "listhen";
import express from "express";
import {
createIPX,
ipxFSStorage,
ipxHttpStorage,
createIPXNodeServer,
} from "ipx";
import fs from "fs";
import path from "path";
import { Writable, PassThrough } from "stream";
const cacheDir = "./.ipx-cache";
// Ensure cache directory exists
if (!fs.existsSync(cacheDir)) {
fs.mkdirSync(cacheDir, { recursive: true });
}
// Custom Writable Stream to capture response data
class CaptureStream extends Writable {
constructor(options) {
super(options);
this.chunks = [];
}
_write(chunk, encoding, callback) {
this.chunks.push(chunk);
callback();
}
getBuffer() {
return Buffer.concat(this.chunks);
}
}
// Middleware to check and cache the response
const cacheMiddleware = (req, res, next) => {
const cacheFilePath = path.resolve(path.join(cacheDir, req.originalUrl));
console.log('Checking cache', cacheFilePath);
if (fs.existsSync(cacheFilePath)) {
console.log('Cache found', cacheFilePath);
res.sendFile(cacheFilePath);
} else {
console.log('Cache not found', cacheFilePath);
    // Create a PassThrough stream piped into a CaptureStream to capture the response data
const captureStream = new CaptureStream();
const passThrough = new PassThrough();
passThrough.pipe(captureStream);
// Override res.write and res.end to use PassThrough stream
const originalWrite = res.write.bind(res);
const originalEnd = res.end.bind(res);
res.write = (chunk, encoding, callback) => {
console.log('res.write');
passThrough.write(chunk, encoding, callback);
return originalWrite(chunk, encoding, callback);
};
res.end = (chunk, encoding, callback) => {
if (chunk)
passThrough.write(chunk, encoding);
passThrough.end();
return originalEnd(chunk, encoding, callback);
};
res.on('finish', () => {
      // Create the cache directory if it doesn't exist
if (!fs.existsSync(path.dirname(cacheFilePath)))
fs.mkdirSync(path.dirname(cacheFilePath), { recursive: true });
// Write captured data to cache file
console.log('Writing cache: ', cacheFilePath);
fs.writeFile(cacheFilePath, captureStream.getBuffer(), (err) => {
if (err) {
console.error('Error writing cache file:', err);
}
});
});
// Continue with IPX processing
next();
}
};
const ipx = createIPX({
storage: ipxFSStorage({ dir: "./public" }),
httpStorage: ipxHttpStorage({ domains: ["picsum.photos"] }),
});
const app = express();
app.use(cacheMiddleware);
app.use("/", createIPXNodeServer(ipx));
listen(app, { port: 4000 });
In my nuxt.config.js I configured this twice:
export default {
// ...
image: {
provider: 'ipx',
ipx: {
baseURL: 'http://localhost:4000/', // Point to your external IPX server
},
},
// ...
routeRules: {
'/_ipx/**': { proxy: { to: 'http://localhost:4000/**' } },
},
// ...
}
In my package.json I added the following:
{
"scripts": {
    "generate": "(node ipx.js & echo $! > ipx.pid) && nuxt prepare && nuxt generate && kill $(cat ipx.pid) && rm ipx.pid"
},
"dependencies": {
"express": "^4.19.2",
"listhen": "^1.7.2",
    "sharp": "^0.33.5"
  }
}
I hope this helps some folks out. For me it literally saves hours per deployment of my statically generated websites. |
I modified @SvanThuijl's solution to use middleware (add the file to
import { defineEventHandler, sendStream } from "h3";
import { PassThrough, Writable } from "stream";
import { promises as fs, existsSync, createReadStream } from "fs";
import { mkdirSync } from "fs";
import path from "path";
import { ServerResponse } from "http";
export default defineEventHandler(async (event) => {
const cacheDir = path.resolve("./.ipx-cache");
const reqUrl = event.req.url || "";
if (!reqUrl.startsWith("/_ipx/")) {
// If the request is not for an IPX image, skip this middleware
return;
}
// Strip the /_ipx/ prefix from the URL to get the cache path
const strippedUrl = reqUrl.replace("/_ipx/", "");
const cacheFilePath = path.resolve(cacheDir, strippedUrl);
// Ensure cache directory exists
if (!existsSync(path.dirname(cacheFilePath))) {
mkdirSync(path.dirname(cacheFilePath), { recursive: true });
}
// Check if the file exists in the cache
if (existsSync(cacheFilePath)) {
// If the file exists in cache, send it directly
console.log(`Serving from cache: ${cacheFilePath}`);
return sendStream(event, createReadStream(cacheFilePath));
}
// Otherwise, capture the response stream to cache it
const originalRes = event.res;
const passThrough = new PassThrough();
const captureStream = new CaptureStream();
passThrough.pipe(captureStream);
// Modify the response object to capture the data
const originalWrite = originalRes.write.bind(originalRes) as (
chunk: any,
encoding?: BufferEncoding | ((error: Error | null | undefined) => void),
callback?: (error: Error | null | undefined) => void
) => boolean;
const originalEnd = originalRes.end.bind(originalRes) as (
chunk?: any,
encoding?: BufferEncoding | ((error: Error | null | undefined) => void),
callback?: () => void
) => ServerResponse;
originalRes.write = (
chunk: any,
encodingOrCallback?:
| BufferEncoding
| ((error: Error | null | undefined) => void),
callback?: (error: Error | null | undefined) => void
): boolean => {
passThrough.write(chunk, encodingOrCallback as BufferEncoding, callback);
return originalWrite(chunk, encodingOrCallback as BufferEncoding, callback);
};
originalRes.end = (
chunk?: any,
encodingOrCallback?:
| BufferEncoding
| ((error: Error | null | undefined) => void),
callback?: () => void
): ServerResponse => {
if (chunk)
passThrough.write(chunk, encodingOrCallback as BufferEncoding, callback);
originalEnd(chunk, encodingOrCallback, callback);
// Write to cache after response has ended
mkdirSync(path.dirname(cacheFilePath), { recursive: true });
fs.writeFile(cacheFilePath, captureStream.getBuffer())
.then(() => {
console.log(`Cached image: ${cacheFilePath}`);
})
.catch((err) => {
console.error(`Error caching image: ${err}`);
});
return originalRes;
};
return;
});
class CaptureStream extends Writable {
private chunks: Buffer[];
constructor(options?: any) {
super(options);
this.chunks = [];
}
_write(
chunk: any,
encoding: BufferEncoding,
callback: (error?: Error | null) => void
): void {
this.chunks.push(
Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk, encoding)
);
callback();
}
getBuffer(): Buffer {
return Buffer.concat(this.chunks);
}
} |
What if we are using Nuxt with static generation? Is there a way to cache the |
I'm using it mostly for |
Ok I will try it! Can't we provide this functionality from nuxt config? imageCache: true |
I think the best case would be someone packaging this into a nuxt module and maintaining it there. This solution would also need a way to clear the cache, though, imo. Unless you want your disk space to fill up over time without ever getting freed. |
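As a sketch of what cache clearing could look like in such a module (the cache directory name, retention policy, and helper names are assumptions, not existing ipx or nuxt options):

```javascript
// Sketch: delete cached IPX files older than maxAgeDays.
// Requires Node 20.12+ for recursive readdir with Dirent.parentPath.
import { promises as fs } from "fs";
import path from "path";

// Pure expiry rule, kept separate so it is easy to test in isolation
export function isExpired(mtimeMs, nowMs, maxAgeDays) {
  return nowMs - mtimeMs > maxAgeDays * 24 * 60 * 60 * 1000;
}

export async function pruneCache(dir, maxAgeDays, now = Date.now()) {
  let removed = 0;
  const entries = await fs.readdir(dir, { withFileTypes: true, recursive: true });
  for (const entry of entries) {
    if (!entry.isFile()) continue;
    const filePath = path.join(entry.parentPath, entry.name);
    const { mtimeMs } = await fs.stat(filePath);
    if (isExpired(mtimeMs, now, maxAgeDays)) {
      await fs.unlink(filePath); // drop stale variant; it will be regenerated on demand
      removed++;
    }
  }
  return removed;
}
```

A module could run something like this on startup or on a timer, alongside a max-cache-size policy.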
Well, I agree! In my use case I have around 50 images, and in every build I wait 2 minutes for all variations to generate. They don't change often, so the cache size wouldn't be an issue. If a Nuxt module were created, options like an expiration date to recreate all images, or a max cache size, should be enough. |
I just tried it, and it worked! I reduced my build time by half while generating 120 IPX images. Initially, the build time was 4 minutes, but I brought it down to 2 minutes. 🎉🎉🎉 One thing that wasn't immediately clear is that you need to run the build locally on your computer, then push the cached images to the repository. This allows the CLI to serve the images from the cache. Thanks for sharing your modified solution @pierreleripoll |
You're welcome @alexookah (and big thanks to @SvanThuijl for the original solution!!!). I also set up a GitHub Action workflow for a project where I needed this feature: nuxtjs.yml. Basically, the first time the workflow processes images, it uploads the _ipx folder in .output/public as a zip file along with the whole app. The next build can then fetch this zip, decompress it, and use it as a cache. What’s also nice with this solution is that if you remove an image in a commit, it will be deleted from the cache in the next build! Hope this helps! |
True, thanks @SvanThuijl. I'd really love it if this were part of nuxt image somehow, like a module or some option. Thanks for the further explanation! This GitHub Action might be helpful. I also liked that the cache is manageable in the repo, but initially I thought the cache was being added inside node_modules/cache, which would be cached automatically in every build in CI (if cache is enabled, e.g. in Cloudflare). |
If I find some time, I'd be happy to package this up in a Nuxt module and maybe even integrate unstorage. We have an SSR project that basically kills all our server resources because ipx is constantly recreating tons of images. We mitigated that somewhat by putting an SWR caching layer in front of it (which mostly helps the end user, who gets served a cached image instead of having to wait for regeneration). However, since revalidation happens in the background, this still eats up tons of server resources recreating images where there's no need. A filesystem cache for generated IPX images is the way out of the misery, but we'd still need a way to clear the cache. Let's see. |
Love to see the iterations. I made a few more changes:
import { defineEventHandler, sendStream } from 'h3'
import { PassThrough, Writable } from 'stream'
import { promises as fs, existsSync, createReadStream } from 'fs'
import { mkdirSync } from 'fs'
import path from 'path'
import { ServerResponse } from 'http'
export default defineEventHandler(async event => {
const cacheDir = path.resolve('./.ipx-cache')
const reqUrl = event.req.url || ''
// Check if the request is for an IPX image
if (!reqUrl.startsWith('/_ipx/'))
return
// Strip the /_ipx/ prefix from the URL to get the cache path
const strippedUrl = reqUrl.replace('/_ipx/', '')
const cacheFilePath = path.resolve(cacheDir, strippedUrl)
// Ensure cache directory exists
if (!existsSync(path.dirname(cacheFilePath))) {
mkdirSync(path.dirname(cacheFilePath), { recursive: true })
}
// Check if the file exists in the cache
if (existsSync(cacheFilePath) &&
// Prevent using cached images when not prerendering
process.env.NODE_ENV === 'prerender') {
// If the file exists in cache, send it directly
console.log(`Serving from cache: ${cacheFilePath}`)
return sendStream(event, createReadStream(cacheFilePath))
}
// Otherwise, capture the response stream to cache it
const originalRes = event.res
const passThrough = new PassThrough()
const captureStream = new CaptureStream()
passThrough.pipe(captureStream)
// Modify the response object to capture the data
const originalWrite = originalRes.write.bind(originalRes) as (
chunk: any,
encoding?: BufferEncoding | ((error: Error | null | undefined) => void),
callback?: (error: Error | null | undefined) => void
) => boolean
const originalEnd = originalRes.end.bind(originalRes) as (
chunk?: any,
encoding?: BufferEncoding | ((error: Error | null | undefined) => void),
callback?: () => void
) => ServerResponse
originalRes.write = (
chunk: any,
encodingOrCallback?:
| BufferEncoding
| ((error: Error | null | undefined) => void),
callback?: (error: Error | null | undefined) => void
): boolean => {
passThrough.write(chunk, encodingOrCallback as BufferEncoding, callback)
return originalWrite(
chunk,
encodingOrCallback as BufferEncoding,
callback
)
}
originalRes.end = (
chunk?: any,
encodingOrCallback?:
| BufferEncoding
| ((error: Error | null | undefined) => void),
callback?: () => void
): ServerResponse => {
if (chunk)
passThrough.write(
chunk,
encodingOrCallback as BufferEncoding,
callback
)
originalEnd(chunk, encodingOrCallback, callback)
// Prevent writing if it was a failed response
if (originalRes.statusCode !== 200)
return originalRes;
// Write to cache after response has ended
mkdirSync(path.dirname(cacheFilePath), { recursive: true })
fs.writeFile(cacheFilePath, captureStream.getBuffer())
.then(() => {
console.log(`Cached image: ${cacheFilePath}`)
})
.catch(err => {
console.error(`Error caching image: ${err}`)
})
return originalRes
}
return
})
class CaptureStream extends Writable {
private chunks: Buffer[]
constructor(options?: any) {
super(options)
this.chunks = []
}
_write(
chunk: any,
encoding: BufferEncoding,
callback: (error?: Error | null) => void
): void {
this.chunks.push(
Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk, encoding)
)
callback()
}
getBuffer(): Buffer {
return Buffer.concat(this.chunks)
}
} |
I was using nuxt image with SVG, and it seems it was also creating some variations with ipx, but the cached variations were not created correctly. |
Why would you skip using the cache when not prerendering? It's still a very valid use case for SSR (probably even more so than for SSG as ipx eats up server resources like a hungry lion). |
A hash check would actually address why I implemented the disabling in dev mode. I personally only use this to generate static websites and develop locally, so I am not familiar with server loads. @alexookah I think the SVG issue is unrelated; I have experienced it without the cache option. SVG is not working properly with nuxt image. |
Implemented a hash check:
import { defineEventHandler, sendStream } from 'h3'
import { PassThrough, Writable } from 'stream'
import { promises as fs, existsSync, createReadStream } from 'fs'
import { mkdirSync } from 'fs'
import path from 'path'
import { ServerResponse } from 'http'
import crypto from 'crypto'
export default defineEventHandler(async event => {
const cacheDir = path.resolve('./.ipx-cache')
const reqUrl = event.req.url || ''
if (!reqUrl.startsWith('/_ipx/')) return
const strippedUrl = reqUrl.replace('/_ipx/', '')
const originalUrl = strippedUrl.split('/').slice(1).join('/')
const originalFilePath = path.resolve('./public', originalUrl) // Path to the original image
const cacheFilePath = path.resolve(cacheDir, strippedUrl)
const hashFilePath = `${cacheFilePath}.hash`
if (!existsSync(path.dirname(cacheFilePath))) {
mkdirSync(path.dirname(cacheFilePath), { recursive: true })
}
if (existsSync(cacheFilePath) && existsSync(hashFilePath)) {
const cachedHash = await fs.readFile(hashFilePath, 'utf-8')
const currentHash = await generateFileHash(originalFilePath)
if (cachedHash === currentHash) {
console.log(`Serving from cache: ${cacheFilePath}`)
return sendStream(event, createReadStream(cacheFilePath))
}
}
const originalRes = event.res
const passThrough = new PassThrough()
const captureStream = new CaptureStream()
passThrough.pipe(captureStream)
const originalWrite = originalRes.write.bind(originalRes) as (
chunk: any,
encoding?: BufferEncoding | ((error: Error | null | undefined) => void),
callback?: (error: Error | null | undefined) => void
) => boolean
const originalEnd = originalRes.end.bind(originalRes) as (
chunk?: any,
encoding?: BufferEncoding | ((error: Error | null | undefined) => void),
callback?: () => void
) => ServerResponse
originalRes.write = (
chunk: any,
encodingOrCallback?:
| BufferEncoding
| ((error: Error | null | undefined) => void),
callback?: (error: Error | null | undefined) => void
): boolean => {
passThrough.write(chunk, encodingOrCallback as BufferEncoding, callback)
return originalWrite(
chunk,
encodingOrCallback as BufferEncoding,
callback
)
}
originalRes.end = (
chunk?: any,
encodingOrCallback?:
| BufferEncoding
| ((error: Error | null | undefined) => void),
callback?: () => void
): ServerResponse => {
if (chunk)
passThrough.write(
chunk,
encodingOrCallback as BufferEncoding,
callback
)
originalEnd(chunk, encodingOrCallback, callback)
if (originalRes.statusCode !== 200) return originalRes
mkdirSync(path.dirname(cacheFilePath), { recursive: true })
const buffer = captureStream.getBuffer()
fs.writeFile(cacheFilePath, buffer)
.then(async () => {
const hash = await generateFileHash(originalFilePath)
await fs.writeFile(hashFilePath, hash)
console.log(`Cached image: ${cacheFilePath}`)
})
.catch(err => {
console.error(`Error caching image: ${err}`)
})
return originalRes
}
return
})
class CaptureStream extends Writable {
private chunks: Buffer[]
constructor(options?: any) {
super(options)
this.chunks = []
}
_write(
chunk: any,
encoding: BufferEncoding,
callback: (error?: Error | null) => void
): void {
this.chunks.push(
Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk, encoding)
)
callback()
}
getBuffer(): Buffer {
return Buffer.concat(this.chunks)
}
}
async function generateFileHash(filePath: string): Promise<string> {
const fileBuffer = await fs.readFile(filePath)
return generateBufferHash(fileBuffer)
}
function generateBufferHash(buffer: Buffer): string {
return crypto.createHash('sha256').update(buffer).digest('hex')
} |
First of all, thank you so much for the middleware! I read the Nuxt page on server middleware (https://nuxt.com/docs/guide/directory-structure/server#server-middleware). It says that "Middleware handlers should not return anything (nor close or respond to the request) and only inspect or extend the request context or throw an error." In the case of the ipx middleware, it returns a stream when it finds that the image is cached. Do you know what implications returning has? I'm guessing other middlewares will not be executed. |
Correct. |
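That matches h3's dispatch behavior: the first handler to return a non-undefined value ends the chain. A toy model of that behavior (a deliberate simplification, not the actual h3 implementation):

```javascript
// Toy model of middleware dispatch: the first handler returning a
// non-undefined value short-circuits the rest of the chain.
function dispatch(handlers, event) {
  for (const handler of handlers) {
    const result = handler(event);
    if (result !== undefined) return result; // like returning sendStream(...)
  }
  return null; // no handler produced a response
}

const cacheHandler = (event) =>
  event.cached ? "cached-image" : undefined; // undefined => fall through
const ipxHandler = () => "freshly-generated-image";

console.log(dispatch([cacheHandler, ipxHandler], { cached: true }));
console.log(dispatch([cacheHandler, ipxHandler], { cached: false }));
```

So when the cache middleware returns a stream, the IPX handler behind it never runs for that request, which is exactly the desired behavior here.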
Is there any specific version of
I suppose it could be connected to this one: #1485 |
Hi, I wonder if there is an option to cache images generated by ipx locally. Right now, it seems that ipx will regenerate a set of images every time npm run generate is run, which can get very cumbersome for 5000+ images. Previously, there seems to have been an option cacheDir as well as clearCache, but my impression is that these options have been removed (and maybe they didn't do what I imagine they did). Is there any plan to support such a feature? It might be interesting in cases of static site generation.