How This German Drug Discovery Startup Uses AI

OncoCoin AG is a Switzerland-based company and a wholly-owned subsidiary of the international AI and blockchain-based drug discovery and development company Innoplexus AG. OncoCoin AG is particularly known for initiating the world’s largest pancreatic cancer biomarker study (PALAS study) conducted by pancreatic cancer experts in Europe.
OncoCoin AG also runs a free mobile app for cancer patients called the CURIA app, which helps cancer patients identify people with similar symptoms, contact doctors for a second opinion, and identify clinical trials. Additionally, OncoCoin AG has introduced one of the largest decentralized, GDPR-compliant real-world data (RWD) exchanges, together with a utility token dubbed AMRIT, to help patients pay for services such as an independent second opinion, access to digital therapies and more. This utility token allows patients to reclaim ownership of their data and benefit from the revenue it earns.
In an exclusive interview with Analytics India Magazine, Ashwinkumar Rathod, co-CEO and co-founder of OncoCoin, explains how they incorporate ethics into their products.
AIM: How does OncoCoin (Innoplexus) leverage AI?
Ashwinkumar Rathod: Innoplexus leverages AI in three ways:
- Translate unstructured public and proprietary data into a knowledge graph. By publicly available data, we literally mean anything crawlable on the web. Our proprietary data covers clinical data, i.e. patient records extracted from clinical sites and patient data uploaded by users of our CURIA and NEURIA applications. CURIA meets the needs of patients suffering from oncological diseases, while NEURIA helps patients suffering from currently incurable neurological diseases such as Parkinson’s disease, Alzheimer’s disease or multiple sclerosis.
- In our drug discovery pipelines, AI is another value driver. Our Ontosight™ enterprise search platform accelerates biomarker identification by an order of magnitude and helps pharmaceutical companies compile shorter, more accurate lists of potential drug candidates.
- Finally, we combine our AI capabilities and publicly available data to predict the future success of ongoing clinical trials. Additionally, we are not only able to predict clinical trial outcomes, but we can also recommend new trial designs that improve the odds of a future positive outcome. The AI-powered insights of our three product families establish the core value of AMRIT: a utility token issued by OncoCoin AG, allowing patients to reclaim ownership and benefit from revenue derived from their data.
AIM: Elaborate on some of the AI governance methods, techniques or frameworks used within your organization to ensure that your products/solutions provide the best user experience possible.
Ashwinkumar Rathod: The constant validation and monitoring of AI models by a team of medical experts at all stages of their life cycle forms the backbone of our AI governance. Additionally, continuous stakeholder and user feedback drives innovation and development. For example, our award-winning products Ontosight™ and Curia™ are enabled by this open atmosphere of rethinking AI solutions from the user’s perspective. Every model used in all product pipelines is validated against a large number of samples that best represent the data population, ensuring that all cases, including minority data segments, are thoroughly validated at a very high confidence level.
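The per-segment validation described above can be sketched in a few lines. This is a minimal illustration, not OncoCoin's actual tooling: the function names and the choice of a Wilson score interval for the per-segment accuracy are our own assumptions.

```python
import math
from collections import defaultdict

def wilson_interval(correct, total, z=1.96):
    """95% Wilson score interval for a proportion (e.g. per-segment accuracy)."""
    if total == 0:
        return (0.0, 1.0)
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

def validate_by_segment(samples):
    """samples: iterable of (segment, prediction_was_correct).
    Returns {segment: (accuracy, (low, high))} so minority segments are
    reported with an explicit confidence interval instead of being hidden
    inside one aggregate score."""
    tallies = defaultdict(lambda: [0, 0])  # segment -> [correct, total]
    for segment, correct in samples:
        tallies[segment][0] += int(correct)
        tallies[segment][1] += 1
    return {seg: (c / n, wilson_interval(c, n)) for seg, (c, n) in tallies.items()}
```

A minority segment with few samples naturally yields a wider interval, which is exactly the signal that it has not yet been validated to the same confidence as the majority.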
AIM: What explains the growing discussions about the ethics, accountability and fairness of AI in recent times? Why is this important?
Ashwinkumar Rathod: The increasing accessibility of and demand for data, especially from private companies with diverse purposes, is leading to growing concerns about data privacy. Establishing the authenticity and reproducibility of published results remains a great challenge and therefore poses a serious concern for the ethics and fairness of AI. Ensuring fair and responsible data analysis therefore requires rigorous regulation or innovative solutions. Ultimately, we believe that trust in AI services requires consumers to regain ownership of their private data, as our blockchain solution exemplifies. The need of the hour is to bring stakeholders together to eradicate data silos through a federated ecosystem where everyone can contribute their data securely. Furthermore, data lineage showing how models are trained and tested could pave the way for future root cause analysis (RCA) and allow bias introduced into the data pipeline to be corrected in a discrete and precise way.
AIM: How does your company ensure compliance with AI governance policies?
Ashwinkumar Rathod: The company has dedicated itself to the agile methodology. We break down each project into several phases. Each phase involves constant collaboration with stakeholders and continuous improvement. Once a project kicks off, the data science team goes through a process of planning, executing, and evaluating. Ongoing collaboration between team members, project stakeholders, and product owners is essential. Improvements are documented and monitored through version control of the software we write and the data we maintain.
We have been developing a dashboard for clinical practitioners for over 15 months, and every incremental improvement, from a simple automated script for extracting anonymized data to an interactive dashboard that gives physicians insight into their patients, similar cases and their patient journeys, is well documented in our project plans and version control systems. Automated periodic training and testing cycles build confidence in published results and incremental improvements as we gather more diverse data sets over time.
AIM: How do you mitigate bias in your AI algorithms?
Ashwinkumar Rathod: Our AI algorithms are constantly evaluated using objective performance metrics at all stages of their lifecycle. Basing downstream decisions on the best-performing algorithms therefore frees those decisions from human judgment and bias. Randomizing the test and training sets for each data segment establishes confidence in the results and flags discrepancies that indicate potential bias. Every feature in the final product goes through a well-documented UAT (user acceptance test) involving multiple types of stakeholders, ensuring that the end result is free from bias.
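The randomized per-segment evaluation described above can be illustrated with a short sketch. This is our own simplified illustration, not OncoCoin's pipeline: `audit_segments`, the trial count and the flagging tolerance are all assumed for the example.

```python
import random
from collections import defaultdict
from statistics import mean

def audit_segments(records, predict, n_trials=50, test_frac=0.3, seed=42, tolerance=0.1):
    """records: list of (features, label, segment); predict stands in for the
    trained model. Repeatedly draws randomized test sets, scores each data
    segment separately, and flags segments whose average accuracy trails the
    overall average by more than `tolerance` -- a simple signal of bias."""
    rng = random.Random(seed)
    per_segment = defaultdict(list)
    overall = []
    for _ in range(n_trials):
        test = rng.sample(records, int(len(records) * test_frac))
        hits = defaultdict(lambda: [0, 0])  # segment -> [correct, seen]
        for features, label, segment in test:
            hits[segment][0] += int(predict(features) == label)
            hits[segment][1] += 1
        for segment, (correct, seen) in hits.items():
            per_segment[segment].append(correct / seen)
        overall.append(sum(c for c, _ in hits.values()) / len(test))
    baseline = mean(overall)
    return {seg for seg, accs in per_segment.items() if mean(accs) < baseline - tolerance}
```

A segment that the model consistently mispredicts will be flagged across the randomized trials, prompting a closer look at how that segment is represented in the training data.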
AIM: Do you have a due diligence process to ensure data is collected ethically?
Ashwinkumar Rathod: We retrieve public data from many countries, regardless of variations in the purposes and formats of the original data. We always strive to extend the scope of the data to provide an accurate and balanced reflection of the state of the art in medical knowledge and diagnosis. This is particularly important when enriching customer data, which may be more limited in scope. Model releases also follow release management and lineage, so that discrepancies can be reverted in the future and their root causes resolved. As a GDPR-compliant organization, we respect the privacy of individuals and organizations when collecting data. Therefore, due diligence occurs not only at the time of obtaining data, but also throughout the pipeline with respect to data aging, storage, updating, and disposal.
AIM: How do you systematically integrate ethical principles related to AI and AI applications into your platform?
Ashwinkumar Rathod: Since our public data crawlers and proprietary data tagger only retrieve information about biomedical entities and no personal data, which is additionally anonymized in the case of proprietary data, racial bias or gender imbalances cannot appear in our training datasets. Therefore, AI solutions trained on them do not suffer from the shortcomings known in other AI-based applications, such as credit scoring or automated candidate selection. Additionally, we host one of the largest ontologies of biomedical entities, which gives our AI near-complete coverage of the factual domain. Finally, no fixed sorting or filtering of data points is imposed, so the end user can always adjust the parameters according to their respective use cases.
AIM: How does your company ensure the privacy of consumer data?
Ashwinkumar Rathod: We are a GDPR-compliant and ISO 27001-certified organization. The privacy and security of consumer data is of paramount importance at every stage of processing. Our blockchain platform allows patients to retain ownership of their own data. In general, we and our customers do not store or process private data without the explicit permission of the data owners. Additionally, as explained above, our AI solutions leverage factual information independent of the personal context of the data owner. Our consumers’ private data is encrypted with AES-256 encryption, which is also recommended by the likes of the US National Security Agency (NSA).
AIM: What are your efforts to help brands establish a trusting and transparent relationship with consumers?
Ashwinkumar Rathod: Transparency in how the model works is often a crucial element in building trust with our clients. We believe that the explainability of a model’s predictions goes a long way in helping a non-technical audience understand its inner workings. We provide transparency in a variety of ways, starting with explaining the segmentation of data into train, test, and prediction sets. We make sure that the training, testing and prediction sets have minimal to zero overlap so that training bias does not affect the prediction results. As a result, the accuracy figures we publish gain credibility with our customers and, in turn, with their downstream customers.
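The overlap guarantee mentioned above is easy to make concrete. A minimal sketch, assuming each record carries a unique ID (`split_overlap` is a hypothetical helper, not part of any OncoCoin product):

```python
def split_overlap(train_ids, test_ids, predict_ids):
    """Report the pairwise overlap between the record IDs in each split.
    Publishing these counts alongside accuracy figures demonstrates that
    test results are not inflated by leakage from the training set."""
    splits = {"train": set(train_ids), "test": set(test_ids), "predict": set(predict_ids)}
    names = list(splits)
    report = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            report[f"{a}/{b}"] = len(splits[a] & splits[b])
    return report
```

A report of all zeros is the desired outcome; any non-zero count identifies exactly which pair of sets shares records and how many.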