Countless digital documents hold valuable info, and the AI industry is attempting to set it free.

For years, businesses, governments, and researchers have struggled with a persistent problem: How to extract usable data from Portable Document Format (PDF) files. These digital documents serve as containers for everything from scientific research to government records, but their rigid formats often trap the data inside, making it difficult for machines to read and analyze.

“Part of the problem is that PDFs are a creature of a time when print layout was a big influence on publishing software, and PDFs are more of a ‘print’ product than a digital one,” Derek Willis, a lecturer in Data and Computational Journalism at the University of Maryland, wrote in an email to Ars Technica. “The main issue is that many PDFs are simply pictures of information, which means you need Optical Character Recognition software to turn those pictures into data, especially when the original is old or includes handwriting.”

  • Em Adespoton@lemmy.ca · 26 points · 2 days ago

    This is silly.

    PDF is the Portable Document Format. It replaced Encapsulated PostScript as a document storage medium. It’s not all that different from a more highly structured zip archive, designed to structure how the layout and metadata are stored as well as the content.

    It has a spec, and that spec includes accessibility features.

    The problem is how many people use it: they take a bunch of images of varying quality and place them on virtual letter-sized pages, sometimes interspersed with form fields and scripts.

    A properly formatted accessible PDF is possible these days with tools available on any computer; these are compact, human- and machine-readable, and rich in searchable metadata.
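
    For example, pulling the text and document metadata back out of a properly tagged, born-digital PDF takes only a few lines (a minimal sketch using the pypdf library; “report.pdf” is just a placeholder name):

    ```python
    # Minimal sketch: read text and metadata from a properly made, tagged PDF.
    # Assumes the pypdf library is installed; "report.pdf" is a placeholder.
    from pypdf import PdfReader

    reader = PdfReader("report.pdf")

    # Document-level metadata travels with the file.
    info = reader.metadata
    print(info.title, info.author)

    # In a born-digital PDF the text is stored as text, not as pixels.
    for page in reader.pages:
        print(page.extract_text())
    ```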

    Complaining about inaccessible PDFs is sort of like complaining about those people who use Excel as a word processor.

    So, with that out of the way… on to the sales pitch: “use AI to free the data!”

    Well I’m sorry, but most PDF distillers since the 90s have come with OCR software that can extract text from the images and store it in a way that preserves the layout AND the meaning. All that the modern AI is doing is accomplishing old tasks in new ways with the latest buzzwords.
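
    That pipeline is easy to script without any new buzzwords; here’s roughly what it looks like (a sketch, assuming the pdf2image and pytesseract wrappers around Poppler and Tesseract; “scan.pdf” is made up):

    ```python
    # Rough sketch: classic OCR on a scanned PDF, no "AI" branding required.
    # Assumes pdf2image (needs Poppler) and pytesseract (needs Tesseract).
    from pdf2image import convert_from_path
    import pytesseract

    # Rasterize each page of the scanned PDF, then run OCR on the image.
    pages = convert_from_path("scan.pdf", dpi=300)
    for number, image in enumerate(pages, start=1):
        print(f"--- page {number} ---")
        print(pytesseract.image_to_string(image))
    ```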

    Remember when PDFs moved “to the cloud?” Or to mobile devices? Remember when someone figured out how to embed blockchain data in them? Use them as NFTs? When they became “web enabled?” “Flash enabled?”

    PDF, as a container file, has ridden all the tech trends and kept going as the convenient place to stuff data of different formats that had to display the same way everywhere.

    It will likely still be around long after the AI hype is gone.

    • FaceDeer@fedia.io · 11 points · 2 days ago

      “This is silly.”

      Whether it’s “silly” or not is irrelevant; the problem described in the article is real. I have seen innumerable PDFs over the years that were atrocious when it came to the use of those accessibility features. The format’s design factors into how people use it, and people use it terribly. If plain old OCR were enough, then this wouldn’t be such a problem.

    • GenderNeutralBro@lemmy.sdf.org · 5 points · 2 days ago

      “Well I’m sorry, but most PDF distillers since the 90s have come with OCR software that can extract text from the images and store it in a way that preserves the layout AND the meaning.”

      The accuracy rate of even the best OCR software is far, far too low for a wide array of potential use cases.

      Let’s say I have an archive of a few thousand scientific papers. These are neatly formatted digital documents, not even scanned images (though “scanned images” would be within scope of this task and should not be ignored). Even for that, there’s nothing out there that can produce reliably accurate results. Everything requires painstaking validation and correction if you really care about accuracy.
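
      Even the “validation” step ends up being homegrown heuristics; something like this is about the most you can automate before a human has to look (a sketch using pypdf, with a made-up folder name and threshold):

      ```python
      # Sketch: batch-extract text from a folder of papers and flag pages that
      # look like they need manual review. Folder name and threshold are made up.
      from pathlib import Path
      from pypdf import PdfReader

      for pdf_path in sorted(Path("papers").glob("*.pdf")):
          reader = PdfReader(pdf_path)
          for number, page in enumerate(reader.pages, start=1):
              text = page.extract_text() or ""
              # Crude heuristic: almost no extractable text usually means an
              # image-only page or a mangled extraction, so queue it for a human.
              if len(text.strip()) < 200:
                  print(f"review needed: {pdf_path.name} page {number}")
      ```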

      Even arXiv can’t do a perfect job of this. They launched their “beta” HTML converter a couple of years ago, and improving accuracy and reliability is an ongoing challenge. And that’s with the help of LaTeX source material! It would naturally be much, much harder if they had to rely solely on the PDFs generated from that LaTeX. See: https://info.arxiv.org/about/accessible_HTML.html

      As for solving this problem with “AI”… uh… well, it’s not like “OCR” and “AI” are mutually exclusive terms. OCR tools have been using neural networks for a very long time already; it just wasn’t a buzzword back then, so nobody called it “AI”. However, in the current landscape of “AI” in 2025, “accuracy” is usually just a happy accident. It doesn’t need to be that way, and I’m sure the folks behind commercial and open-source OCR tools are hard at work implementing new technology in a way that Doesn’t Suck.

      I’ve played around with various VL (vision-language) models, and they still seem to be in the “proof of concept” phase.
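
      For reference, the experiments look something like this (a sketch using Hugging Face’s TrOCR as a stand-in for a small vision-language model; “line.png” is a placeholder, and the fact that it only reads one cropped text line at a time is part of the “proof of concept” feel):

      ```python
      # Sketch: run a small vision-language OCR model (TrOCR) on one cropped
      # line image. "line.png" is a placeholder; TrOCR handles a single text
      # line, so full-page layout is still your problem.
      from PIL import Image
      from transformers import TrOCRProcessor, VisionEncoderDecoderModel

      processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
      model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

      image = Image.open("line.png").convert("RGB")
      pixel_values = processor(images=image, return_tensors="pt").pixel_values
      generated_ids = model.generate(pixel_values)
      print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
      ```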

    • cmnybo · 3 points · 2 days ago

      OCR makes a lot of mistakes, so unless someone bothered to go through and correct them, it’s only really useful for searching for keywords.