Efficient data processing and sharing are essential across healthcare, from patient care, clinical research, and health service planning to government billing and reporting for funding and research. But most of the data the industry deals with is human-generated, meaning it's messy and riddled with errors, and it is often manually entered from Excel spreadsheets into various disparate technology platforms and systems. In some cases, our health system operates on infrastructure that is over 20 years old.
Many processes in the system are administratively cumbersome, diverting crucial resources away from patients. For example, a single U.S. national insurer I worked with had more than 500 employees manually entering the provider and facility data its members need to find doctors covered by their insurance. Yet the average accuracy of this manually entered information was less than 60%. The time it takes to manually enter and cleanse this type of data is a clear factor in the $1 trillion in administrative costs that plague healthcare every year.
Automation sounds like a great solution – for this use case and others – but not all automation solutions are created equal. Solutions that work for other industries are not equipped to handle healthcare's distinctive datasets. Effectively streamlining the handling of messy and inconsistent healthcare data is extremely difficult, which is why the industry has found it so hard to learn from its own data, let alone resolve its interoperability issues. The result? Healthcare organizations are stuck with highly inefficient systems that impede care.
Fortunately, new automation approaches specifically designed to work with healthcare's unique "messy" data can help reduce unnecessary administrative costs and wasted time, and lead to more efficient, error-free care and compliance with new legislation, such as the No Surprises Act, which took effect on January 1, 2022 and requires plans to update provider directory data within 48 hours. If we are ever going to meet the challenges plaguing the industry, healthcare leaders must turn to this new era of automation tools designed specifically to learn and understand healthcare data.
Here are three considerations for evaluating the ability of a given automation solution to effectively process “messy” health plan data:
- The types of data you have and their quality
There are generally two types of data – internal data and external data – and the type an organization deals with directly correlates to the quality (or “cleanliness”) of that data.
Internal data is created by users interacting with a web platform. Consider, for example, the user data created by retail companies like Amazon, Uber, and DoorDash. These convenient built-in processes are easy to deploy and produce clean data that is fully machine-recorded, with none of the missing values, misspellings, or other outliers seen with manually entered data. Internal data behaves well because it is created by a system with a strict data schema. In contrast, external data does not conform to any pattern and is subject to the free will of the creative minds that enter it. In external data, you can find a phone number in the address field or see doctors' credentials appended to their first or last name. External data is at the heart of healthcare's data issues. It's the "messy" human-generated data that somehow has to travel through multiple disparate systems, from provider to payer and beyond (think emojis instead of text, empty cells, etc.).
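To make the contrast concrete, here is a minimal sketch of cleaning two "external" provider records. The field names, credential list, and records are hypothetical illustrations of the problems described above (credentials fused into names, a phone number sitting in the address field), not any particular plan's data model.

```python
import re

# Hypothetical "external" records as they often arrive from manual entry.
raw_records = [
    {"last_name": "Smith MD", "address": "555-867-5309"},
    {"last_name": "Jones", "address": "12 Main St, Springfield"},
]

CREDENTIALS = {"MD", "DO", "NP", "PA", "PHD"}  # illustrative subset
PHONE_RE = re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$")

def normalize(record):
    """Move misplaced values into the fields where they belong."""
    cleaned = dict(record)
    # Strip credentials that were appended to the name field.
    parts = cleaned["last_name"].split()
    creds = [p for p in parts if p.strip(".,").upper() in CREDENTIALS]
    cleaned["last_name"] = " ".join(p for p in parts if p not in creds)
    cleaned["credentials"] = [c.strip(".,").upper() for c in creds]
    # Detect a phone number sitting in the address field.
    if PHONE_RE.match(cleaned["address"]):
        cleaned["phone"] = cleaned.pop("address")
        cleaned["address"] = None
    return cleaned

cleaned = [normalize(r) for r in raw_records]
```

A system with a strict schema would have rejected these values at entry time; with external data, this kind of after-the-fact repair is unavoidable.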
- Existing knowledge and human processes
Once the quality of the data has been assessed, it is equally important to determine the context in which it is received and ingested in order to set realistic goals for more efficient processing. For example, payers have unique data processing purposes and needs. The volume of provider data received by national health plans is far greater than that of plans operating in single markets. While some of these plans have started documenting the data they process, others haven't even started evaluating their data. It's important for payers to set realistic goals for themselves; those that are more advanced will likely end up compliant beyond what the No Surprises Act requires.
What is needed upstream to feed these tools is the codification of the plan's decision-making processes within its current manual data entry infrastructure. Knowledge residing in the human brain (or, in some organizations, in formal manuals and documentation) must be taught to the automation solution, which can then replicate human decision-making. This means the first step to creating the "new" solution is to thoroughly understand the business process of the "old" one. After all, there's no point in speeding things up 100x if a lack of process causes errors and fallout, since you'll be overwhelmed with 100x the errors and fallout too!
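Codifying a human decision rule can be as simple as writing it down as a function. The rule, field names, and records below are hypothetical, a sketch of what "teaching the automation solution how humans decide" might look like for one common conflict: two versions of the same provider record.

```python
def resolve_conflict(incoming, existing):
    """Replicate a hypothetical human rule: prefer the more recently
    attested record, but never overwrite a known value with a blank."""
    if incoming.get("attested_date", "") > existing.get("attested_date", ""):
        # Start from the existing record, then apply only the
        # non-empty fields from the newer incoming record.
        return {**existing, **{k: v for k, v in incoming.items() if v}}
    return existing
```

Dozens of small rules like this, extracted from the people doing the work today, are what let an automated pipeline make the same calls a data entry team would, only consistently and at scale.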
There’s a significant amount of upfront work to do here, but the payoff is huge. Once provider list data ingestion is automated, instead of scrolling through 40+ inaccurate entries in their plan’s provider directory, patients can locate the right healthcare professional the first time.
- The appropriate level of collaboration between humans and machines
Healthcare organizations have struggled for years to master human-generated data, without success. Although human-centric data processes are slow and error-prone, full automation is impractical.
To analyze and automate "messy" data, intelligent human-in-the-loop AI solutions (systems that combine human judgment with machine intelligence) are needed. This approach provides a middle ground for healthcare plans and providers to use AI and human decision-making together to improve data and, in turn, deliver better patient care, health outcomes, and medical research.
Human-in-the-loop automation means that the automation technology acts as a translator, sitting between existing systems and constantly translating data from one format to another so that it can flow easily without disrupting current systems. As mentioned above, automation solutions need to learn up front how humans make decisions, and they also need to "learn" when to signal a human during the data processing workflow itself. Ultimately, AI isn't replacing humans in this case — it's taking over the repetitive tasks it can learn — and humans need to stay connected to workflows at key decision points.
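The "signal a human" step described above is often implemented as a confidence threshold: records the system is sure about flow straight through, and the rest land in a review queue. The scoring function and threshold below are illustrative assumptions, not a description of any specific product.

```python
# Hypothetical threshold below which a record is routed to a human.
REVIEW_THRESHOLD = 0.85

def score_confidence(record):
    """Toy confidence score: fraction of required fields present.
    A real system would use a trained model here."""
    required = ("name", "specialty", "address", "phone")
    return sum(1 for f in required if record.get(f)) / len(required)

def triage(records):
    """Split records into auto-processed and human-review queues."""
    auto, review = [], []
    for r in records:
        if score_confidence(r) >= REVIEW_THRESHOLD:
            auto.append(r)
        else:
            review.append(r)
    return auto, review
```

The design choice that matters is the threshold: set it too high and humans drown in reviews, too low and errors slip through, so it is typically tuned against the plan's own accuracy targets.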
When it comes to solving its messy data problems, it's obvious that the healthcare industry still has some way to go. By bringing new automation solutions and approaches to the table, healthcare organizations can dramatically reduce unnecessary costs, enable compliance, and modernize to deliver better patient care and a more streamlined healthcare experience for all.
About Bob Lindner, PhD
As Co-Founder and Chief Scientific Officer, Dr. Bob Lindner oversees Veda's science and research teams. He provides strategic vision, develops innovative technologies, and connects Veda scientists to its Scientific Advisory Board.
Bob fell in love with data science and has a passion for solving big problems. With over ten years of experience, he is a published and acclaimed astrophysicist with expertise in data modeling and the design and construction of cloud-based machine learning systems.
Bob has made a number of important discoveries and "first sightings" over his years of research and study. Most notably, he created machine learning code that automates and accelerates scientists' ability to analyze data from next-generation telescopes. This program, Gausspy, continues to deepen scientists' understanding of the origins of our galaxy. He earned his doctorate in physics from Rutgers University and was a postdoctoral fellow at UW-Madison, where he led the development of Gausspy.