Early practice management systems (PMS) were born out of the need to provide better continuity of care for patients and more efficient operations for a GP or practice, both of which contribute to improved quality and safety of care delivery. They have delivered significant, if often intangible, benefits to individuals and society over the last several decades.
The new technology enablers, however, make it possible to transform PMS into more federated, better connected, and evidence-based systems by leveraging the emerging interoperability standards and Artificial Intelligence (AI) technologies. This will make the new generation of PMS even more central to primary care systems within the overall healthcare continuum.
This will enable future generations of Best Practice Software to bring many new benefits to patients, practitioners, and the community at large – contributing to a ‘more sophisticated and connected community healthcare management’, as mentioned in a recent Wild Health article.
The technology enablers include web-based and cloud infrastructure, now being used as the basis for the next generation of Best Practice Software, referred to internally as Titanium.
When used in conjunction with new interoperability standards such as HL7 FHIR®, cloud technology adds new mechanisms to the way various parties in the delivery of healthcare are connected, including support for patient engagement.
Through the cloud, AI solutions can be built leveraging huge amounts of data created by clinicians, including as part of collaboration with other clinicians, and in some cases, generated by medical devices. Such solutions can provide new insights to the clinicians and support new models of clinician-patient collaborations, with added emphasis on preventative and personalized health.
The Added Value of Interoperability
Architecting for interoperability adds dynamic and evolvable aspects to the way health systems of the future are connected, typically using APIs over cloud. This allows constructing and managing flexible event-driven clinical workflows supporting multiple participants, including hospitals, Aged Care facilities, community health centres, and patients. This is not currently possible using HL7 v2 messaging integration approaches.
The emerging HL7 FHIR® standard provides a common information model for representing digital health data (the so-called FHIR Resource entities) and API interfaces, both of which support building interoperable and connected digital health systems, and many international vendors are now embracing it. In some cases, this is in response to regulatory requirements, such as the US Office of the National Coordinator (ONC) Cures Act Final Rule. This rule was designed to give patients and their healthcare providers secure access to health information. It also aims to increase innovation and competition by fostering an ecosystem of applications that provide patients with more choices in their healthcare, in part through standardized API interfaces.
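To make the two sides of the standard concrete, the sketch below shows a minimal FHIR R4 Patient resource represented as a Python dictionary, together with the standard FHIR RESTful "read" URL convention (`GET [base]/[type]/[id]`). The patient details and server base URL are hypothetical examples, not real data or a real endpoint.

```python
# A minimal sketch of an HL7 FHIR R4 Patient resource as a Python dict.
# All identifiers, names, and dates below are hypothetical examples.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Citizen", "given": ["Jane"]}],
    "birthDate": "1980-04-12",
}

def fhir_read_url(base, resource_type, resource_id):
    """Build the standard FHIR RESTful 'read' URL: GET [base]/[type]/[id]."""
    return f"{base}/{resource_type}/{resource_id}"

# Hypothetical FHIR server base URL, for illustration only.
url = fhir_read_url("https://fhir.example.org/r4", patient["resourceType"], patient["id"])
print(url)  # https://fhir.example.org/r4/Patient/example-001
```

Because every conformant server exposes the same resource shapes and the same URL patterns, a client written against one FHIR endpoint can, in principle, be pointed at another with minimal change, which is what makes the ecosystem of interoperable applications possible.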
Best Practice Software recognizes the many benefits that the FHIR® standard can bring in the context of cloud technologies and is currently establishing a long-term FHIR® adoption roadmap as part of its strategic direction.
The Added Value of AI
In general, AI is a collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being. AI adds value through automating many tasks typically involving human actions and decision making.
Examples of AI use in healthcare are in the interpretation of medical images, e.g., X-rays and MRI scans, in the personalized treatment of patients based on their medical history and genetics, and in the optimization of clinical workflows.
A key component of AI is machine learning (ML), whereby computers ‘learn’ without being explicitly programmed, making use of the large amount of clinical data collected over time (aka training data) and applying advanced computational reasoning techniques. This can be in the form of:
- statistical machine learning searching for a predictive function from the training data
- reinforcement learning approaches constructing AI algorithms with “rewards” or “penalties” based on their problem-solving performance, inspired by control theory approaches
- deep learning solutions based on the use of artificial neural networks.
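The first of these, statistical machine learning, can be illustrated with a toy example: fitting a predictive function (here, simple least-squares linear regression) from training pairs. The "clinical" data points below are invented purely for illustration and carry no medical meaning.

```python
# Toy illustration of statistical machine learning: searching for a
# predictive function y ≈ a*x + b from training data via least squares.

def fit_linear(xs, ys):
    """Closed-form least-squares fit of a line to training pairs (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical training data (illustrative only): age vs. a measurement.
ages = [30, 40, 50, 60, 70]
measurements = [118, 122, 128, 133, 139]
a, b = fit_linear(ages, measurements)

def predict(age):
    """The learned predictive function."""
    return a * age + b
```

Real clinical ML models are of course far richer, but the shape is the same: historical data in, a predictive function out, with no explicit rule programmed by a human.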
Other AI applications include natural language processing; computer vision, which underpins many clinical image processing applications; and robotics. Another area of use in health is knowledge representation, particularly the documentation of clinical knowledge in a computable form such as the SNOMED CT clinical terminology.
Many rule-based Clinical Decision Support (CDS) systems can also be regarded as a form of AI. Best Practice Software has, since its initial release, included CDS features aimed at helping clinicians provide safer and more personalized healthcare. For example, when prescribing, background checks are made for potential allergies, drug interactions, contra-indications, etc. New AI approaches can add another, data-driven level to CDS, contributing to better evidence-based healthcare provision.
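The rule-based flavour of CDS described above can be sketched in a few lines: before a prescription is issued, the proposed drug is checked against the patient's recorded allergies and a table of known interaction pairs. The drug names and the interaction table below are hypothetical placeholders, not clinical reference data, and the function names are illustrative rather than drawn from any real product.

```python
# A minimal sketch of a rule-based CDS prescribing check.
# The interaction table is a hypothetical stand-in for a real drug database.
KNOWN_INTERACTIONS = {frozenset({"drug_a", "drug_b"})}

def prescribing_alerts(proposed_drug, allergies, current_meds):
    """Return a list of alert strings raised by the rule checks."""
    alerts = []
    # Rule 1: flag a recorded allergy to the proposed drug.
    if proposed_drug in allergies:
        alerts.append(f"ALLERGY: patient has a recorded allergy to {proposed_drug}")
    # Rule 2: flag known interactions with current medications.
    for med in current_meds:
        if frozenset({proposed_drug, med}) in KNOWN_INTERACTIONS:
            alerts.append(f"INTERACTION: {proposed_drug} interacts with {med}")
    return alerts

print(prescribing_alerts("drug_a", allergies={"drug_c"}, current_meds=["drug_b"]))
```

A data-driven AI layer would sit alongside rules like these, ranking or supplementing the alerts with patterns learned from historical outcomes rather than hand-written checks.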
Best Practice Software is currently looking at AI technologies for its future products to advance the creation of learning health systems for primary health providers as part of connected health ecosystems. The aim is to support more effective, evidence-based, and personalized clinical care and adaptable clinical workflows, as well as more efficient administrative operations of practices, based on the large volumes of historic data that have been collected. Possibilities include analysing patients' previous investigations to support predictive clinical actions, and text mining of correspondence with specialists, hospitals, and other clinicians to support better decision making when similar symptoms present in the future.
While interoperability delivers more connected and event-driven care, analytics and AI provide augmented decision making for clinicians.
Establishing Trust for Providers and Consumers - Guidance for Developers
An important consideration when discussing AI technologies is ensuring that clinicians trust the decisions made with the help of an AI system. This is often referred to as the explainability problem: mechanisms are needed to support clinicians in understanding how AI systems reach their decisions.
There is a further element of trust, whereby learning health systems need to ensure that personal and societal confidence in IT systems is preserved in the presence of data proliferation and sharing. To this end, special care needs to be taken to express rules related to privacy, policy, and ethics. These concerns were discussed at more length in the paper delivered by Best Practice Software at the recent AI in Healthcare workshop in Oct 2021, and are highlighted next.
One way to create trust is to develop “explainable” AI, where developers can present the underlying basis for decision-making in a way that is understandable to humans and can demonstrate that the system is working as expected by clinicians.
Another part of the guidance for developers is related to the problem of expressing computable expressions of policies, such as obligations, permissions, accountability, responsibility, and delegation. These expressions can be implemented in code as part of any digital health application, including the AI solutions. For example, they can be used to encode rules associated with privacy consent, governing the rules of access to personal healthcare information, or with research consent, governing the rules of clinical research.
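One way such a policy can be made computable is sketched below: a privacy-consent rule in which a patient records the purposes of access they permit, and a permission check is evaluated in code before data is released. The `Consent` structure, the purpose names, and the function name are all hypothetical, assumed purely for this illustration.

```python
# A sketch of a computable privacy-consent policy, assuming a simple model
# in which a patient records the purposes of access they permit.
from dataclasses import dataclass, field

@dataclass
class Consent:
    patient_id: str
    permitted_purposes: set = field(default_factory=set)  # e.g. {"treatment"}

def access_permitted(consent, requester_purpose):
    """Permission rule: access is allowed only for consented purposes."""
    return requester_purpose in consent.permitted_purposes

consent = Consent("pat-001", {"treatment"})
print(access_permitted(consent, "treatment"))  # True
print(access_permitted(consent, "research"))   # False
```

Obligations, delegations, and accountability rules can be encoded in the same style, so that every access decision made by an application, AI-based or otherwise, is traceable to an explicit, machine-checkable policy rather than to ad hoc logic.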
Computable expressions of policies are also important when one needs to express responsibilities associated with the passing of healthcare data between providers, taking into account legal constraints such as data ownership or custodianship, or regulatory constraints associated with privacy.
AI brings its own set of policy issues, such as how to specify the 'responsibility' of AI applications. In the case of safety concerns, for example, does responsibility lie with the AI developer, the IT staff who deploy the system, or the users of the system, such as clinicians?
These issues are yet to be addressed by legal systems, but a computable policy framework should be a required prerequisite when building scalable AI in any healthcare organization.
Dr Frank Pyefinch
CEO at Best Practice Software
Dr Zoran Milosevic
Interoperability and AI Consultant at Best Practice Software
The paper presented at the AI in Healthcare Workshop is available upon request. If you would like to obtain a copy, please contact Dr Zoran Milosevic here.