Introduction to AI in Biomedical and Biotechnology

R. K. Chaurasia, Vaibhav Maheswari, A. K. Saini
Copyright: © 2024 | Pages: 20
DOI: 10.4018/979-8-3693-3629-8.ch002

Abstract

The fusion of AI with biomedicine and biotechnology is gaining high visibility as an asset for assessing various health problems faster and at lower cost than earlier methodologies. Despite the challenges, AI can help in many ways in the near future, such as earlier detection and diagnosis of disease, more effective and personalized treatment options, reduced healthcare costs, and improved resource allocation. AI algorithms are already being used to analyze X-rays, CT scans, and other images to detect disease earlier and with great accuracy, leading to improved patient health outcomes. AI also analyzes massive genomic data to recognize disease-related genes, predict disease risk, and develop personalized therapies. It can thus be concluded that the infusion of AI into biomedicine and biotechnology has pushed healthcare into a transformative era.

Introduction

The merging of Artificial Intelligence (AI) and Machine Learning (ML) marks a significant step towards a revolutionary era in biomedical and biotechnological research, with the potential to completely transform the ways in which treatments are discovered and developed. This merger could dramatically change the pharmaceutical industry by streamlining the traditionally laborious and time-consuming processes involved in bringing innovative prescription drugs to market. It takes 10 to 15 years and an average pre-tax cost of approximately USD 2.6 billion to identify a therapeutic target and advance it through clinical development (Sreelakshmi et al., 2004). Even with these substantial efforts, the success rate for new pharmaceutical approvals remains appallingly low, with only 13% of novel small molecules reaching clinical realization. This sobering reality emphasizes how urgently innovative solutions are needed to raise the calibre and efficiency of drug research and development.

Figure 1. Conceptual connections between artificial intelligence, machine learning, and deep learning for drug development

The pharmaceutical sector has fundamentally changed as a result of advances in AI- and ML-driven computer-aided drug design technologies. These computational techniques offer a systematic theoretical evaluation of molecular properties such as bioactivity, pharmacokinetic behaviour, selectivity, side effects, and physicochemical properties. Computational technologies that generate optimal compounds with desired features in silico have the potential to significantly reduce the failure rates of preclinical lead molecules. The drug development process can be improved further by employing multi-objective optimization techniques, which ensure a more direct and targeted route to clinical trials, as sketched below.
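
To make the multi-objective idea concrete, the following minimal sketch ranks hypothetical candidate compounds by folding several normalized property estimates into one weighted score. The compound names, property values, and weights are all invented for illustration, and weighted-sum scalarization is only one of several possible multi-objective strategies.

```python
import numpy as np

# Hypothetical predicted properties for three candidate compounds.
# Columns: bioactivity (higher is better), toxicity risk (lower is better),
# aqueous solubility (higher is better). All values are illustrative.
compounds = ["cmpd_A", "cmpd_B", "cmpd_C"]
props = np.array([
    [0.82, 0.30, 0.55],
    [0.64, 0.10, 0.71],
    [0.91, 0.45, 0.40],
])

# Flip toxicity so that "higher is better" holds for every column.
props[:, 1] = 1.0 - props[:, 1]

# Weighted-sum scalarization: fold the objectives into one ranking score.
weights = np.array([0.5, 0.3, 0.2])
scores = props @ weights

for name, score in sorted(zip(compounds, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```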

Figure 2. Overview of AI and ML tools used in drug development and discovery

Figure 3. Links between AI, ML, and DL for healthcare

The fundamental idea behind this technological revolution (Blanco et al., 2023) is the use of artificial intelligence to assess, learn from, and analyze massive amounts of pharmaceutical data. Through these advances, AI-driven software applications can identify novel pharmaceutical substances through a highly automated and integrated procedure. Compared to traditional methods, which rely on an empirical understanding of complex physicochemical principles, machine learning approaches place greater emphasis on converting vast amounts of biological data into insightful and valuable knowledge. This data-driven computational process is made possible by a variety of machine learning approaches, including Support Vector Machines (SVM), Random Forests, k-Nearest Neighbours (kNN), Logistic Regression, Naïve Bayesian Classification, and Deep Learning methods.
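
As a hedged illustration of this data-driven workflow, the sketch below compares the classifier families named above on a synthetic stand-in for a bioactivity dataset using scikit-learn. The data are randomly generated, so the accuracy scores carry no biological meaning; the point is only the shape of the comparison.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a bioactivity dataset: 500 "compounds",
# 20 numeric descriptors, binary active/inactive label.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=8, random_state=0)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
}

# 5-fold cross-validated accuracy for each classifier family.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```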

Key Terms in this Chapter

Convolutional Neural Networks (CNNs): A convolutional neural network, or CNN, is a type of deep neural network that is particularly useful for analyzing visual imagery. CNNs are made up of several layers of neurons arranged into three primary categories: convolutional layers, pooling layers, and fully connected layers.
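
A minimal PyTorch sketch of the three layer categories from this definition; the 28x28 grayscale input shape and the layer sizes are illustrative assumptions, not prescriptive.

```python
import torch
import torch.nn as nn

# Minimal CNN showing convolutional, pooling, and fully connected layers.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a random batch of four 28x28 grayscale images.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```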

Genomic Data: The entirety of the genetic information contained in an organism's DNA (deoxyribonucleic acid) is referred to as genomic data. Numerous experimental methods, such as chromatin immunoprecipitation (ChIP), microarray analysis, DNA sequencing, and high-throughput sequencing technologies like RNA sequencing (RNA-seq) and chromatin immunoprecipitation sequencing (ChIP-seq), are used to collect genomic data.

Bioinformatics: Bioinformatics is an interdisciplinary field that brings together computer science, statistics, mathematics, and biology to analyze and interpret biological data, especially at the molecular level. It entails the creation and use of software tools, algorithms, and computational techniques to comprehend biological processes, evaluate enormous datasets, and generate insightful predictions.

Autoencoders: Autoencoders are a kind of artificial neural network used for unsupervised learning, employed especially in deep learning and representation learning. Their purpose is to learn efficient data representations by first encoding the input into a lower-dimensional latent space and then decoding it back into the original input space.
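
A minimal PyTorch sketch of this encode-then-decode structure, with illustrative layer sizes (64-dimensional input, 8-dimensional latent space) and reconstruction error as the unsupervised training signal.

```python
import torch
import torch.nn as nn

# Minimal autoencoder: encode 64-dimensional input into an
# 8-dimensional latent space, then decode it back. Sizes are illustrative.
class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim: int = 64, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
x = torch.randn(16, 64)                      # a batch of unlabeled samples
loss = nn.functional.mse_loss(model(x), x)   # reconstruction error
loss.backward()                              # one unsupervised training step's gradient
print(float(loss))
```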

Natural Language Processing (NLP): Natural language processing (NLP) is a subfield of artificial intelligence (AI) concerned with how people and computers communicate using natural language. It includes the creation of methods and algorithms that allow computers to meaningfully comprehend, interpret, produce, and respond to human language, making communication and interaction between people and machines easier.
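
As a toy illustration of turning language into something a computer can process, the sketch below builds a bag-of-words representation with scikit-learn. The two clinical-style sentences are invented, and real NLP pipelines go far beyond raw token counts.

```python
from sklearn.feature_extraction.text import CountVectorizer

# A toy corpus of clinical-style sentences (invented for illustration).
docs = [
    "Patient reports chest pain and shortness of breath.",
    "No chest pain; patient reports mild headache.",
]

# Bag-of-words: one elementary way to convert free text into numbers.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())
print(X.toarray())  # token counts per document
```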

Machine Learning: A division of Artificial Intelligence that focuses on building systems that learn and improve their performance from the data they consume.

Pharmaceutical Data: Information about the creation, manufacturing, distribution, and use of pharmaceutical products.

Dimensionality Reduction: Dimensionality reduction is a technique used in machine learning and data analysis to minimize the number of variables, or dimensions, in a dataset while preserving its essential information. Its main objective is to reduce computational complexity and simplify the dataset by representing it in a lower-dimensional space, which facilitates visualization, analysis, and interpretation. Nonetheless, it is important to pay close attention to how dimensionality reduction affects the performance of downstream processes and to ensure that crucial information is retained.
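
A minimal sketch using principal component analysis (PCA), one common dimensionality reduction technique, on synthetic data whose true structure is three-dimensional; the sample count, feature count, and noise level are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic high-dimensional data: 200 samples, 50 correlated features
# generated from a hidden 3-dimensional structure plus small noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 50)) + 0.05 * rng.normal(size=(200, 50))

# Reduce 50 dimensions to 3 while keeping most of the variance.
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (200, 3)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```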
