Like it or not, it's happening!
2023 is the year of #DeepLearning in Healthcare.
🚨New editorial @natBME on how Graph Neural Nets & Transformers are shaping Computational Medicine via contextual learning.
Editorial written partly by the editors, partly by #chatGPT🤫
Main takeaways👇
This editorial highlights some of the most recent articles published in @natBME on #DeepLearning from medical images.
It discusses specifically how contextual information can substantially improve model performance and clinical interpretability.
nature.com/articles/s41551-022-00997-w
Images are very popular in medicine.
They are used all the time for a very wide variety of tasks.
Images are particularly relevant in #oncology, where they are the most important tool for diagnosing & tracking disease.
Traditionally, humans need to interpret medical images.
But humans are humans, and they sometimes make "human mistakes".
Also, the number of images a human can process is fixed, so total throughput scales only linearly with the number of people interpreting.
An #AI algorithm, on the other hand, can scale much, much more efficiently.
Even if it lacks the depth and nuance of interpretation of, say, an experienced pathologist, it can still streamline the entire process by reducing cost & assisting medical experts.
Here is another example of a #GNN with biotech applications: a tool for dynamic time-dependent data, such as:
- series of images from different progression points in time
- temporal snapshots of protein-protein interaction networks or gene-expression networks.
twitter.com/simocristea/status/1597294880027705344?s=20&t=iK2hkd0Ot6YqmLSwR0PS_g
On a more molecular level, the editorial briefly discusses a very nice new paper by @james_y_zou, which trains GNNs on spatial protein profiles from multiplexed immunofluorescence.
The GNN can extract clinically relevant features from the tumor microenvironment of cancer patients.
When applied on tissues from patients with head-and-neck and colorectal cancers, the model identifies spatial motifs associated with cancer recurrence and patient survival following treatment.
nature.com/articles/s41551-022-00951-w
Another important study shows how enhancing Graph Neural Network models with histopathological features from the tumor microenvironment extracted from whole-slide images can better predict cancer prognosis in kidney, breast, lung and uterine cancers.
nature.com/articles/s41551-022-00923-0
Treating histological tissues as graphs, with image patches as nodes, ensures that different regions of the same whole-slide image are interconnected & dependencies can be explicitly modeled (e.g. with GNNs).
This opens up many opportunities vs. treating patches independently.
However, such a strategy is also computationally expensive.
The paper addresses this explicitly by implementing a patch-aggregation strategy with edge-level attention.
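To make the patch-graph idea concrete, here is a minimal numpy sketch of edge-level attention aggregation over patch embeddings. This is an illustrative toy, not the paper's implementation: the graph, dimensions, and parameters are all hypothetical, and real models learn W and a by training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy slide: 6 image patches, each with a 4-dim
# embedding; edges connect spatially adjacent patches.
patch_emb = rng.normal(size=(6, 4))          # node features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]

# These would be learned parameters; random stand-ins here.
W = rng.normal(size=(4, 4))                  # shared projection
a = rng.normal(size=(8,))                    # edge-attention vector

h = patch_emb @ W                            # project all patches

# Edge-level attention: score each edge from its two endpoint
# embeddings, softmax the scores per node, then aggregate each
# patch's neighbours with those weights.
agg = np.zeros_like(h)
for i in range(len(h)):
    nbrs = [v if u == i else u for (u, v) in edges if i in (u, v)]
    scores = np.array([a @ np.concatenate([h[i], h[j]]) for j in nbrs])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    agg[i] = sum(w * h[j] for w, j in zip(weights, nbrs))

# Slide-level representation: mean-pool the aggregated patch features.
slide_repr = agg.mean(axis=0)
print(slide_repr.shape)
```

Because neighbouring patches exchange information, the pooled slide representation can reflect spatial dependencies that independent per-patch models would miss.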
A summary of this paper is also presented in this News & Views piece:
nature.com/articles/s41551-022-00924-z
2️⃣Transformer models (e.g. #chatGPT)
Transformers learn the meaning of words and the structure of language via contextual clues.
One of the main advantages of such models is that they are self-supervised & don't need explicit annotations in the training data.
Extending this idea, @pranavrajpurkar & @AndrewYNg use Transformers to identify pathologies in unlabelled chest X-ray images.
The vision Transformer uses, as contextual clues, pathology features that a text Transformer learns from the raw radiology report associated with each X-ray image.
The raw radiology reports thus act as a natural source of supervision.
The performance of the self-supervised model is comparable to that of radiologists.
That's quite impressive.
nature.com/articles/s41551-022-00936-9
On a different note, this Review paper highlights something very important: how openly releasing pre-trained models changes the paradigm from model building to model deployment.
I believe this is an important conceptual shift for Computational Medicine.
nature.com/articles/s41551-022-00898-y
Indeed, such large self-supervised #DeepLearning models need lots of training data.
Getting so much data is usually prohibitive.
Moreover, training on so much data is often not computationally feasible.
Access to pre-trained models changes this paradigm by increasing accessibility.
To wrap up: self-supervised #DeepLearning models are already very prolific in their applications to Computational Medicine.
Contextual learning makes them much better.
They are not perfect, but they need not be.
There’s immense opportunity for improving & extending such models.
In my view, one very exciting and still under-explored area is the application of such models to DNA sequencing data.
In particular, decoding broken DNA, such as that of cancer patients, has the potential to be very insightful.
2023 will certainly be big on this!