A complete guide on how to take full-page screenshots with Puppeteer, Playwright, or Selenium

Dmytro Krasun · 8 min read

To take full-page screenshots with Puppeteer or Playwright, you only need to set the fullPage parameter to `true` when taking a screenshot. But there are so many caveats that you might conclude the full-page screenshot feature in Puppeteer simply does not work.

If you don’t have time to read, skip ahead to the “Fixing most issues at once” section.

The problem is so painful and common that one of the top queries you see in Google search results is “Puppeteer full page screenshot not working”. The query is not grammatically correct, but it describes the experience accurately.

Let’s list some issues you may encounter when taking a full-page screenshot with Puppeteer, Playwright, or Selenium:

  • triggering and waiting for lazy-loaded images and other lazy-loaded resources;
  • animations that appear and play only when you scroll the page;
  • the inability to take entire-page screenshots of very long sites.

Let’s address them one by one by building a sophisticated solution that solves them all at once.

The pitfalls of taking full-page screenshots in Playwright, Selenium, or Puppeteer are the same, but I will use Puppeteer, the most popular of the three, to demonstrate how to solve them.

Waiting for lazy-loaded images

Let’s take a full-page screenshot of the Apple site:

const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 1280, height: 1024 });
    await page.goto("https://apple.com/", {
      waitUntil: ["load", "domcontentloaded"],
    });
    await page.screenshot({
      type: "jpeg",
      path: "screenshot.jpeg",
      fullPage: true,
    });
  } catch (e) {
    console.log(e);
  } finally {
    await browser.close();
  }
})();

The site has lazy-loaded images. They are not loaded until they appear in the viewport: you need to scroll to bring them into view and trigger loading. And Puppeteer, Playwright, and Selenium, by design, don’t trigger that loading when taking a full-page screenshot.

Look at the result of the default behavior:

The Apple website without scroll

But it is relatively easy to fix by scrolling the page to the bottom and then taking a screenshot. Let’s do it and see what happens:

const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 1280, height: 1024 });
    await page.goto("https://apple.com/", {
      waitUntil: ["load", "domcontentloaded"],
    });
    await scroll(page);
    await page.screenshot({
      type: "png",
      path: "screenshot.png",
      fullPage: true,
    });
  } catch (e) {
    console.log(e);
  } finally {
    await browser.close();
  }
})();

async function scroll(page) {
  return await page.evaluate(async () => {
    return await new Promise((resolve) => {
      // scroll down one viewport at a time to trigger lazy loading
      const interval = setInterval(() => {
        window.scrollBy(0, window.innerHeight);
        if (
          document.scrollingElement.scrollTop + window.innerHeight >=
          document.scrollingElement.scrollHeight
        ) {
          // return to the top so the screenshot starts from the beginning
          window.scrollTo(0, 0);
          clearInterval(interval);
          resolve();
        }
      }, 100);
    });
  });
}

You can check out the result, and it looks promising:

The Apple website after scroll

It works. But it is not the only pitfall you will encounter while taking full-page screenshots.

For example, you might encounter an infinite scroll, and how to solve it depends on your needs. Limiting the number of “scrolls” you perform is the easiest way. Or you can use simple but effective logic to detect the infinite scroll: if you reach the bottom of the site and its height suddenly increases, there is probably an endless scroll.
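Here is a minimal sketch combining both ideas; the maxScrolls cap is a safeguard I made up, not a Puppeteer option:

// a sketch, not a definitive implementation: scroll one viewport at a time,
// stop once the bottom is reached and the page height stops growing, and
// bail out after maxScrolls (a hypothetical cap) to survive infinite scrolls
async function scrollToBottom(page, maxScrolls = 20) {
  let previousHeight = -1;
  for (let i = 0; i < maxScrolls; i += 1) {
    const { atBottom, height } = await page.evaluate(() => {
      window.scrollBy(0, window.innerHeight);
      const el = document.scrollingElement;
      return {
        atBottom: el.scrollTop + window.innerHeight >= el.scrollHeight,
        height: el.scrollHeight,
      };
    });
    // give lazily loaded content a moment to extend the page
    await new Promise((resolve) => setTimeout(resolve, 250));
    if (atBottom && height === previousHeight) {
      break;
    }
    previousHeight = height;
  }
  await page.evaluate(() => window.scrollTo(0, 0));
}

You can drop it in as a replacement for the scroll() helper from the previous example.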

Animations that play only when you scroll

The approach we applied for lazy load images won’t work for sites with complex animations:

A broken screenshot of the APItoolkit.io website

But the expected result is:

The valid screenshot of the APItoolkit.io website

How can we achieve it? One of the easiest ways is to take screenshots of the site section by section and then merge the sections. It is pretty easy to do.

We need to install libraries to help us merge the images:

npm i jimp merge-img

And then we are going to scroll the site to the bottom, take a screenshot of each section, and merge them:

const puppeteer = require("puppeteer");
const merge = require("merge-img");
const Jimp = require("jimp");

async function scrollDown(page) {
  return await page.evaluate(() => {
    window.scrollBy(0, window.innerHeight);
    return (
      window.scrollY >=
      document.documentElement.scrollHeight - window.innerHeight
    );
  });
}

function wait(milliseconds) {
  return new Promise((resolve) => {
    setTimeout(resolve, milliseconds);
  });
}

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(`https://apitoolkit.io/`);
    const path = "screenshot.png";
    const { pages, extraHeight, viewport } = await page.evaluate(() => {
      window.scrollTo(0, 0);
      const pageHeight = document.documentElement.scrollHeight;
      return {
        pages: Math.ceil(pageHeight / window.innerHeight),
        extraHeight:
          (pageHeight % window.innerHeight) * window.devicePixelRatio,
        viewport: {
          height: window.innerHeight * window.devicePixelRatio,
          width: window.innerWidth * window.devicePixelRatio,
        },
      };
    });
    const sectionScreenshots = [];
    for (let index = 0; index < pages; index += 1) {
      // wait until animations are played
      await wait(400);
      const screenshot = await page.screenshot({
        type: "png",
        captureBeyondViewport: false,
      });
      sectionScreenshots.push(screenshot);
      await scrollDown(page);
    }
    if (pages === 1) {
      const screenshot = await Jimp.read(sectionScreenshots[0]);
      screenshot.write(path);
      return screenshot;
    }
    if (extraHeight > 0) {
      // the last section overlaps the previous one; crop it to the leftover height
      const cropped = await Jimp.read(sectionScreenshots.pop())
        .then((image) =>
          image.crop(
            0,
            viewport.height - extraHeight,
            viewport.width,
            extraHeight
          )
        )
        .then((image) => image.getBufferAsync(Jimp.AUTO));
      sectionScreenshots.push(cropped);
    }
    // merge the sections vertically into one image
    const result = await merge(sectionScreenshots, { direction: true });
    await new Promise((resolve) => {
      result.write(path, () => {
        resolve();
      });
    });
  } catch (e) {
    console.log(e);
  } finally {
    await browser.close();
  }
})();

Let’s test it:

The valid screenshot of the APItoolkit.io website

And it works fine.

I set captureBeyondViewport to false to ensure that only the elements visible in the viewport are rendered. Otherwise, some elements might be stretched beyond the viewport and break the screenshot.

Taking entire-page screenshots of very long sites

Puppeteer or Playwright might fail to take a full-page screenshot when the site is very long, often because the capture exceeds the browser’s internal size limits. In that case, the same idea used for the animation issue, taking screenshots by sections and merging them, can handle the problem.
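For example, you could cap the number of sections; maxSections below is a hypothetical safety limit of mine, not a library option:

// a minimal sketch: derive the number of sections with a safety cap so
// extremely long pages don't produce thousands of screenshots
function sectionCount(pageHeight, viewportHeight, maxSections = 50) {
  // Math.ceil accounts for the final, partially filled section
  return Math.min(Math.ceil(pageHeight / viewportHeight), maxSections);
}

console.log(sectionCount(100000, 1024)); // 50 instead of 98

In the code below, you would use this value in place of the pages variable when looping over sections.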

Fixing most issues at once

Taking screenshots by sections is the best solution we have today for capturing complete pages:

const puppeteer = require("puppeteer");
const merge = require("merge-img");
const Jimp = require("jimp");

async function scrollDown(page) {
  return await page.evaluate(() => {
    window.scrollBy(0, window.innerHeight);
    return (
      window.scrollY >=
      document.documentElement.scrollHeight - window.innerHeight
    );
  });
}

function wait(milliseconds) {
  return new Promise((resolve) => {
    setTimeout(resolve, milliseconds);
  });
}

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(`https://apitoolkit.io/`);
    const path = "screenshot.png";
    const { pages, extraHeight, viewport } = await page.evaluate(() => {
      window.scrollTo(0, 0);
      const pageHeight = document.documentElement.scrollHeight;
      return {
        pages: Math.ceil(pageHeight / window.innerHeight),
        extraHeight:
          (pageHeight % window.innerHeight) * window.devicePixelRatio,
        viewport: {
          height: window.innerHeight * window.devicePixelRatio,
          width: window.innerWidth * window.devicePixelRatio,
        },
      };
    });
    const sectionScreenshots = [];
    for (let index = 0; index < pages; index += 1) {
      // wait until animations are played
      await wait(400);
      const screenshot = await page.screenshot({
        type: "png",
        captureBeyondViewport: false,
      });
      sectionScreenshots.push(screenshot);
      await scrollDown(page);
    }
    if (pages === 1) {
      const screenshot = await Jimp.read(sectionScreenshots[0]);
      screenshot.write(path);
      return screenshot;
    }
    if (extraHeight > 0) {
      // the last section overlaps the previous one; crop it to the leftover height
      const cropped = await Jimp.read(sectionScreenshots.pop())
        .then((image) =>
          image.crop(
            0,
            viewport.height - extraHeight,
            viewport.width,
            extraHeight
          )
        )
        .then((image) => image.getBufferAsync(Jimp.AUTO));
      sectionScreenshots.push(cropped);
    }
    // merge the sections vertically into one image
    const result = await merge(sectionScreenshots, { direction: true });
    await new Promise((resolve) => {
      result.write(path, () => {
        resolve();
      });
    });
  } catch (e) {
    console.log(e);
  } finally {
    await browser.close();
  }
})();

Let’s test the code on a different site that we haven’t checked before and make sure it works. Here is a screenshot of Stripe:

The valid screenshot of the stripe.com website

Yes, it works. But from time to time, you may still encounter issues. Each site is different, and custom code can break scrolling or otherwise defeat your tricks. So you need to keep updating the code to make sure it works and supports more and more sites.

Home exercise

Want to improve the algorithm and practice taking full-page screenshots? Try to take a screenshot of the Tesla website:

The problem is that the site scrolls its background in place, so there is no full page to capture in the usual sense. Try it yourself.
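As a starting hint, here is a sketch of one heuristic (my assumption, not an official technique): if the document is no taller than the viewport while the site clearly has more content to show, the site probably swaps sections in place instead of scrolling.

const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://tesla.com/");
  // if the page is no taller than the viewport, regular scrolling won't
  // reveal more content, so the section-by-section approach won't work as is
  const scrollHijacked = await page.evaluate(
    () => document.documentElement.scrollHeight <= window.innerHeight
  );
  console.log({ scrollHijacked });
  await browser.close();
})();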

Alternatives

So, what is the alternative for rendering full-page screenshots when Puppeteer, Playwright, or Selenium can’t render them properly?

Easy peasy. You can use a screenshot API as a service. Many good screenshot APIs might already solve your problem. As the maker of one such screenshot API, I will show you how easy it is to use and the benefits you get as a bonus.

Sign up to get an access key, and then taking a screenshot is as easy as sending an HTTP request:

https://api.screenshotone.com/take?url=https://apple.com&full_page=true
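For example, from Node.js (a sketch; it assumes Node 18+ for the built-in fetch, and that the access_key query parameter carries the key you received after signing up):

const fs = require("fs/promises");

(async () => {
  // replace <your_access_key> with the key from your account
  const url =
    "https://api.screenshotone.com/take" +
    "?url=https://apple.com&full_page=true&access_key=<your_access_key>";
  const response = await fetch(url);
  // the response body is the rendered image itself
  await fs.writeFile(
    "screenshot.png",
    Buffer.from(await response.arrayBuffer())
  );
})();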

As you can see, the lazy-loaded images are handled correctly:

The Apple website screenshot taken by the ScreenshotOne API

Let’s test the animation problem by rendering a site with scroll-triggered animations:

https://api.screenshotone.com/take?url=https://apitoolkit.io&full_page=true

It works:

The valid screenshot of the APItoolkit.io website taken by the ScreenshotOne API

So, why waste time fixing all these problems in Puppeteer instead of using a simple API, especially when you can start for free?

Summary

If you have the time and energy to invest in building screenshotting or HTML rendering infrastructure, I would go with Puppeteer or Playwright for full-page screenshots. Otherwise, you can save time and money by starting free with a screenshot API.

Additional resources

Have a nice day 👋 You might also find helpful 👉 the complete guide on how to take a screenshot with Puppeteer.