


Page SEO Analyzer: Free On-Page SEO Audit Tool

Paste a page's HTML and get a scored audit covering meta tags, headings, canonical, Open Graph, Twitter Card, JSON-LD, and accessibility signals.

Glyph Widgets
May 5, 2026
11 min read
page seo analyzer · on-page seo audit · meta tag checker · html seo audit · open graph checker · json-ld validator

What Is the Page SEO Analyzer?

The Page SEO Analyzer parses a page's HTML and reports the things that actually move on-page rankings: title and description length, heading hierarchy, canonical URL, Open Graph and Twitter Card coverage, JSON-LD structured data, and a long list of technical signals like charset, hreflang, render-blocking scripts, and image alt text. I reach for this tool when a client sends me a single template they want audited but I don't have access to their CMS — paste the rendered HTML once, get a numbered fix list back. Errors, warnings, and informational findings are color-coded and sorted by severity, and a five-tab breakdown surfaces every signal the parser found so I can verify what's there as easily as what isn't.

Key Features

  • SEO score with category breakdown — every analysis produces a 0-100 overall score, plus per-category subscores for crawlability, content, social, structured data, and accessibility, so you can see where the page is leaking points.
  • Meta tag audit — extracts and validates title (length plus pixel-width estimate against Google's ~600px desktop cap), description, canonical, robots, viewport, charset, and <html lang>, calling out missing or oversized fields.
  • Heading hierarchy view — lists every H1 through H6 in document order with indentation, flags missing H1, multiple H1s, and skipped levels (for example H2 jumping straight to H4).
  • Open Graph and Twitter Card coverage — checks og:title, og:description, og:image, og:url, og:type, og:site_name, plus all four twitter:* fields, and warns when og:url and <link rel="canonical"> disagree.
  • JSON-LD parsing and validation — extracts every <script type="application/ld+json"> block, parses it, counts unparseable blocks separately, and shows the resolved @type on each schema.
  • Image alt-text and dimension checks — counts images missing alt, with empty alt, with filename-as-alt (IMG_1234.jpg), missing explicit width and height (a Cumulative Layout Shift signal), and below-fold images that aren't using loading="lazy".
  • Link audit — internal vs external counts, generic anchor-text detection (click here, read more, learn more), in-page fragment links pointing at IDs that don't exist, and target="_blank" without rel="noopener".
  • Tech tab — at-a-glance status for canonical, charset, viewport, html lang, robots, favicon, hreflang count, main landmark, preload hints, render-blocking scripts, deprecated tags (<center>, <font>, <marquee>...), and base href.
  • Browser-side parsing — uses the native DOMParser to walk the HTML; no upload, no external API.
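The heading-hierarchy checks in the list above (missing H1, multiple H1s, skipped levels) can be sketched in a few lines of plain JavaScript. This is an illustrative sketch operating on heading levels in document order, not the tool's actual source; the function and issue names are assumptions:

```javascript
// Audit heading levels (1 for H1 ... 6 for H6) in document order.
// Flags missing H1, multiple H1s, and skipped levels (e.g. H2 -> H4).
function auditHeadings(levels) {
  const issues = [];
  const h1Count = levels.filter((l) => l === 1).length;
  if (h1Count === 0) issues.push("missingH1");
  if (h1Count > 1) issues.push("multipleH1");
  for (let i = 1; i < levels.length; i++) {
    // A level may only rise by one step at a time; any bigger jump is a skip.
    if (levels[i] > levels[i - 1] + 1) {
      issues.push(`skippedLevel:h${levels[i - 1]}->h${levels[i]}`);
    }
  }
  return issues;
}
```

For example, `auditHeadings([1, 2, 4])` reports the H2-to-H4 jump, while `[1, 2, 3, 2, 3]` is clean because levels may drop by any amount.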

How to Use the Page SEO Analyzer

Step 1: Paste Your HTML

The single input on the page is a textarea labeled "HTML Input". Paste the full HTML source of one page — typically what you get from your browser's "View Source" (Ctrl+U / Cmd+Option+U), the response body of a curl -L request, or the rendered HTML you've exported from your framework. A 2 MB upper bound is enforced; larger inputs trigger an alert before parsing, because DOMParser on the main thread starts to stutter beyond that size.
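A byte-accurate version of that size gate can be sketched as follows. This assumes the cap is measured in UTF-8 bytes rather than string length (a reasonable guess, since multi-byte characters inflate the byte count); `isWithinSizeCap` is an illustrative name, not the tool's source:

```javascript
const MAX_BYTES = 2 * 1024 * 1024; // 2 MB cap, matching the tool's limit

// Measure UTF-8 byte size, not string length: html.length undercounts
// multi-byte characters, so a string can exceed 2 MB of bytes while its
// character count stays under the same number.
function isWithinSizeCap(html) {
  return new TextEncoder().encode(html).length <= MAX_BYTES;
}
```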

Step 2: Click Analyze

Press the Analyze button below the textarea. Parsing runs in your browser; results appear immediately. If the textarea is empty, the tool toasts "Please enter HTML to analyze" instead of failing silently.

Step 3: Read the Score Summary

A 4-card row appears at the top: the overall Score (green at 80+, yellow at 50-79, red below 50), Errors, Warnings, and the count of Schemas found. Below the cards, the Issues list shows every finding sorted by severity — errors first (red XCircle), then warnings (yellow AlertTriangle), then informational notes (blue CheckCircle). Each issue has an i18n message with the relevant numbers filled in (for example "8 of 12 images missing alt text").

Step 4: Drill into the Tabs

Five tabs sit below the issues:

  • Meta — title with character count and an "Optimal" badge when the length falls within 50-60 characters, description with the same badge at 150-160 characters, plus canonical and robots.
  • Headings — every heading in document order with an H1-H6 badge and the heading text. Indentation visualizes nesting depth.
  • Social — Open Graph and Twitter Card cards side-by-side with each field filled in or marked "Not set".
  • Schema — every JSON-LD block with its @type and a syntax-highlighted JSON dump.
  • Tech — 16 small status cards covering charset, viewport, hreflang, main landmark, preload hints, render-blocking scripts, images missing dimensions, lazy-load coverage, deprecated tags, internal/external link counts, and base href.

Step 5: Fix and Re-analyze

Make fixes in your source, paste the new HTML, click Analyze again. Each run records a history entry summarizing errors, warnings, and heading count, so you can compare two runs across the same template. Supporters can also save labeled snapshots, generate a PDF report, and pull translated fix snippets for the most common findings.

Practical Examples

Auditing a Marketing Landing Page

A landing page is built with <title>Welcome to ProductX — The All-in-One Customer Platform for Ambitious Teams</title> (74 characters, well over the 60-character warning line) and no <meta name="description">. Pasting the HTML returns three findings: a titleTooLong warning, a missingDescription error, and a titlePixelTooLong warning because the title is also wider than Google's ~600px desktop cap. Trim the title to roughly 50-60 characters, write a 150-160 character description, re-run, and the Meta tab shows green "Optimal" badges next to both lengths.
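The length classification behind those badges is simple range logic. A minimal sketch, assuming the thresholds stated in this article (titles 50-60 characters, descriptions 150-160); the function name and return labels are illustrative:

```javascript
// Classify a title or description length against an optimal range.
// Thresholds: lengthBadge(title, 50, 60) / lengthBadge(description, 150, 160).
function lengthBadge(text, min, max) {
  if (!text) return "missing";
  if (text.length < min) return "short";
  if (text.length > max) return "long";
  return "optimal";
}
```

So a 74-character title comes back "long", a 55-character one "optimal", and an absent description "missing".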

Validating Article Structured Data Before Publish

A blog post template adds an Article JSON-LD block. Paste the rendered HTML; the Schema tab shows Schema #1 (Article) with the full block pretty-printed. If a comma is missing, the issues list shows invalidJsonLd with the count and the schema tab won't list that block — quick verification before push. The Open Graph card alongside it also shows whether og:type is article (Facebook expects this for article posts).

Spotting Accessibility and Performance Smells

A page passes Lighthouse on a quick scan but the Tech tab is more thorough. Render-blocking scripts shows 4 in yellow because four <script src> tags in <head> lack async or defer. Images missing dimensions shows 12 because the team hasn't been setting width and height. Lazy images shows 1 / 14 — only one of the eligible below-fold images uses loading="lazy". Each of these is a small fix that compounds across templates.
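The render-blocking count described above can be approximated outside a browser with a regex scan of the <head>. The tool itself walks the parsed DOM, so this is only a rough sketch (it can misfire on unusual markup, and it ignores type="module" scripts, which defer by default):

```javascript
// Count <script src> tags inside <head> that lack async/defer.
// Regex approximation for illustration; the real tool inspects the DOM.
function countRenderBlockingScripts(html) {
  const head = (html.match(/<head[\s\S]*?<\/head>/i) || [""])[0];
  const scripts = head.match(/<script\b[^>]*\bsrc=[^>]*>/gi) || [];
  return scripts.filter((tag) => !/\b(async|defer)\b/i.test(tag)).length;
}
```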

Tips and Best Practices

Paste rendered HTML, not source HTML. Modern frameworks render meta tags, headings, and JSON-LD via JavaScript. If you paste pre-render template source, the analyzer sees an empty <head> and reports a long list of false-positive errors. Use your browser's DevTools "Copy outerHTML" on the <html> element, or curl the URL with -L to follow redirects.

Fix errors first, then warnings, then info. The issue list is already severity-sorted. Errors (red) block crawl/index — noindexDetected, missingViewport, invalidJsonLd. Warnings (yellow) are real misses — title length, missing alt, missing canonical. Info (blue) are nudges — metaKeywordsDeprecated, genericAnchorText, paginationLinks. Don't chase a perfect score by silencing info findings; they're signals, not errors.

Compare two snapshots of the same template. Run the analyzer before and after a fix and watch the Errors and Warnings counts change. The history panel below the tool keeps the last few runs as a quick diff source. For larger audits, supporter snapshots let you label runs and restore the HTML later.

Trust the pixel-width check, not just the character count. Two 60-character titles can render at different pixel widths because of letter shapes — "WWW Information Initiative" is wider than "lily lily lily lily lily i" at the same character count. Google's SERP truncation is pixel-based at roughly 600px. The titlePixelTooLong warning catches titles that fit the character budget but still get cut off.
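Outside a browser (where canvas measureText isn't available) you can still see why the two titles above differ with a crude per-character width table. All widths below are rough assumptions for illustration only, not Google's or the tool's actual metrics:

```javascript
// Very rough pixel-width estimate for ~20px Arial-like text.
// Width values are assumed for illustration; the tool itself uses
// canvas measureText against 20px Arial.
const WIDE = /[MWmw@%]/;          // ~19px glyphs
const NARROW = /[iljtfI.,:;'|!]/; // ~6px glyphs
function estimatePixelWidth(text) {
  let px = 0;
  for (const ch of text) {
    if (WIDE.test(ch)) px += 19;
    else if (NARROW.test(ch)) px += 6;
    else if (ch === " ") px += 6;
    else if (ch >= "A" && ch <= "Z") px += 14;
    else px += 11;                // average lowercase/digit
  }
  return px;
}
```

Even this blunt table ranks "WWW Information Initiative" as much wider than "lily lily lily lily lily i" despite identical character counts, which is the whole point of the pixel check.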

Common Issues and Troubleshooting

"Please enter HTML to analyze" — the textarea is empty or only whitespace. Paste actual HTML (it doesn't have to start with <!DOCTYPE html>; partial fragments parse fine, but missing <head> will surface most fields as "not set").

"HTML too large (max 2 MB)" — the input exceeds 2 MB. Trim to the section you care about (typically just <head> plus the body skeleton is enough for most checks), or save the HTML to a file and use a local script. The 2 MB cap exists because DOMParser on the main thread can stutter or freeze the tab on larger inputs.

"Failed to parse HTML" — the parser couldn't make sense of the input. This usually means the input isn't HTML at all (you pasted JSON or a URL by accident). Confirm the input starts with < and contains tag syntax.

Score is lower than expected on a known-good page. Check the Tech tab. Many points are deducted for informational findings that don't stand out in the issues summary — a missing favicon, no preload hints on a content-rich page, generic anchor text in your nav. None of these are blockers, but they all subtract from the score.

JSON-LD count is one less than expected. The missing schema block has a JSON syntax error, so it shows up in the issues list as invalidJsonLd rather than in the Schema tab. Copy the block into the JSON Formatter to find the missing comma or unmatched brace.
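The split between parsed schemas and unparseable blocks can be sketched like this. Regex extraction is used here for illustration (the tool queries the parsed DOM), and the function name is an assumption:

```javascript
// Extract <script type="application/ld+json"> bodies, then separate
// blocks that parse (recording their @type) from blocks that don't.
function auditJsonLd(html) {
  const re = /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
  const types = [];
  let invalidCount = 0;
  for (const [, body] of html.matchAll(re)) {
    try {
      const obj = JSON.parse(body);
      types.push(obj["@type"] ?? "Unknown");
    } catch {
      invalidCount++; // counted separately, exactly like the invalidJsonLd issue
    }
  }
  return { types, invalidCount };
}
```

A page with one valid Article block and one block containing a trailing comma reports `{ types: ["Article"], invalidCount: 1 }` — one schema fewer than you pasted.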

Hreflang count looks right but hreflangInvalid is firing. Hreflang values must be BCP-47 — en, en-US, pt-BR, or the special x-default. The tool flags codes that don't match ^([a-z]{2,3}(-[A-Za-z0-9]+)*|x-default)$ (case-insensitive). Common offenders: en_US with an underscore, english, or trailing whitespace.
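The shape check is a direct application of the pattern quoted above. Note that the validator should not trim its input, since trailing whitespace is one of the offenders the tool flags:

```javascript
// BCP-47 shape check using the article's pattern (case-insensitive).
// Deliberately no trim(): "en-US " with trailing whitespace must fail.
const HREFLANG_RE = /^([a-z]{2,3}(-[A-Za-z0-9]+)*|x-default)$/i;
const isValidHreflang = (code) => HREFLANG_RE.test(code);
```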

Privacy and Security

The Page SEO Analyzer parses HTML in your browser using the native DOMParser. The HTML you paste does not leave your device, and there is no server endpoint that receives it. This matters for staging and unreleased pages — you can audit a draft template that's still behind authentication without exposing it to a third-party crawler. Once the tool's JavaScript has loaded, the analysis itself runs offline.

Frequently Asked Questions

How does the tool decide what counts as a serious issue versus a nudge?

Three severity tiers map to the W3C/Google guidance for each signal. Errors are blockers — noindex directives, missing <meta viewport>, parse failures in JSON-LD, missing H1 on a content page. Warnings are real misses that hurt SERP appearance or accessibility but don't block indexing — title or description outside the recommended length, canonical mismatch with og:url, target="_blank" without rel="noopener". Info findings are quality nudges — generic anchor text, missing favicon, deprecated tags. Severity is set in the issue generation step in the source, not configurable per run.

Can the analyzer fetch a URL for me?

Not currently. The textarea accepts pasted HTML only, because in-browser fetching against arbitrary origins is blocked by CORS and would require a server proxy. If you need to audit a remote URL, run curl -L -A "Mozilla/5.0" https://example.com > page.html from a terminal and paste the file contents.

Why does my title-length check pass but titlePixelTooLong still fires?

Because Google truncates titles at roughly 600 pixels at the standard 20px Arial render, not at 60 characters. Wide letters (M, W, capital letters generally, em-dashes) push the pixel count over the cap even when the character count looks fine. The pixel estimate uses canvas measureText against 20px Arial,Helvetica,sans-serif — a close approximation of what Google uses on desktop SERPs.

What does the score actually mean?

The score is a weighted sum across five categories: crawlability (can search engines index this), content (title, description, headings), social (Open Graph, Twitter Card), structured data (JSON-LD presence and validity), and accessibility (alt text, lang, landmarks). Each issue subtracts a fixed number of points based on severity. A score of 80+ indicates the major signals are present and well-formed; 50-79 means real misses; below 50 means at least one error-tier blocker is firing. Treat the score as a directional summary, not an absolute grade.
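One way the severity-based deduction could work is sketched below. The point values are assumptions chosen for illustration — the article does not publish the tool's actual weights:

```javascript
// Hypothetical per-severity deductions; the real weights are not published.
const PENALTY = { error: 15, warning: 5, info: 2 };

// Subtract a fixed penalty per issue from 100, floored at 0 —
// the "each issue subtracts a fixed number of points" model.
function overallScore(issues) {
  const total = issues.reduce((sum, i) => sum + (PENALTY[i.severity] ?? 0), 0);
  return Math.max(0, 100 - total);
}
```

Under these assumed weights, one error plus one warning plus one info lands at 78 — already out of the green 80+ band, which matches the "directional summary" framing above.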

Does the tool check anything that requires running the page?

No — every check is static parse-time. Things that require execution or network (Core Web Vitals timing, server response codes, robots.txt fetch, sitemap reachability) are out of scope. For those, use the Coming Soon: Broken Link Checker for crawl-time link validation and the Coming Soon: Sitemap Tools for sitemap diagnostics.

Can I audit a page protected by login?

Yes, and that's a reason to use this tool. Open the page while logged in, copy the rendered HTML from DevTools (right-click <html> → Copy → Copy outerHTML), paste, and analyze. Nothing leaves your browser, so authenticated pages and unreleased templates are safe to test.

Related Tools

  • Coming Soon: Meta Tag Generator — generate the title, description, Open Graph, and Twitter Card tags this analyzer checks for.
  • Coming Soon: Schema Generator — build valid Article, FAQ, Product, and Organization JSON-LD without hand-writing the JSON.
  • Coming Soon: SERP Preview — see how your title and description will look on Google before you ship the change.
  • Coming Soon: Accessibility Suite — deeper accessibility audit covering ARIA, contrast, keyboard navigation, and form labels.
  • JSON Formatter — debug invalidJsonLd warnings by re-formatting and validating the offending block.

Try Page SEO Analyzer now: Coming Soon: Page SEO Analyzer

Last updated: May 5, 2026
