Craft and publish engaging content in an app built for creators.
Publish anywhere
Post on LinkedIn, Threads, & Mastodon at the same time, in one click.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, sharable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
@aaditsh
I think this thread hook could be improved.
@frankdilo
On it 🔥
Share drafts & leave comments
Write with your teammates and get feedback with comments.
Easlo
@heyeaslo
Reply with "Notion" to get early access to my new template.
Jaga
@kandros5591
Notion 🙏
DM Sent
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
170,000+ creators and teams have chosen Typefully to curate their Twitter presence.
Marc Köhlbrugge @marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp @jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer but nothing beats it. If you’re still tweeting from Twitter you’re wasting time.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory @visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Control user access
Decide who can view, edit, or publish your drafts.
Like it or not, it's happening!
2023 is the year of #DeepLearning in Healthcare.
🚨New editorial @natBME on how Graph Neural Nets & Transformers are shaping Computational Medicine via contextual learning.
Editorial written partly by the editors, partly by #chatGPT 🤫
Main takeaways👇
This editorial highlights some of the most recent articles published in @natBME on #DeepLearning from medical images.
It discusses specifically how contextual information can substantially improve model performance and clinical interpretability.
nature.com/articles/s41551-022-00997-w
Images are ubiquitous in medicine.
They are used constantly, for a very wide variety of tasks.
Images are particularly relevant in #oncology, as the most important tool in diagnosing & tracking disease.
Traditionally, humans need to interpret medical images.
But humans are humans, and they sometimes make "human mistakes".
Also, the number of images a single human can process is roughly constant, so the total number of processed images scales only linearly with the number of people interpreting.
An #AI algorithm, on the other hand, can scale much, much more efficiently.
Even if it might lack the depth and nuance of interpretation that, for example, an experienced pathologist has, it can still streamline the entire process by reducing costs & assisting medical experts.
Here is another example of a #GNN with biotech applications: a tool for dynamic, time-dependent data, such as:
- series of images from different progression points in time
- temporal snapshots of protein-protein interaction networks or gene-expression networks (see the sketch after the link below).
twitter.com/simocristea/status/1597294880027705344?s=20&t=iK2hkd0Ot6YqmLSwR0PS_g
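To make the snapshot idea concrete, here is a minimal sketch assuming PyTorch and PyTorch Geometric; the module names and shapes are illustrative, not the tool's actual API. Each snapshot is encoded by a GNN, and a recurrent layer models the temporal dimension.

```python
# Illustrative sketch: encode each temporal snapshot with a GNN,
# then model the time dimension with a GRU. Names/shapes are assumptions.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class TemporalGNN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, hid_dim)               # per-snapshot message passing
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, snapshots):
        # snapshots: list of (x, edge_index) pairs, one graph per time point
        embeddings = []
        for x, edge_index in snapshots:
            h = torch.relu(self.conv(x, edge_index))
            batch = torch.zeros(x.size(0), dtype=torch.long)
            embeddings.append(global_mean_pool(h, batch))  # one vector per snapshot
        seq = torch.stack(embeddings, dim=1)               # (1, T, hid_dim)
        _, last = self.gru(seq)
        return last.squeeze(0)                             # summary of the graph series
```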
On a more molecular level, the editorial briefly discusses a very nice new paper by @james_y_zou, which trains GNNs on spatial protein profiles from multiplexed immunofluorescence.
The GNN can extract clinically relevant features from the tumor microenvironment of cancer patients.
When applied on tissues from patients with head-and-neck and colorectal cancers, the model identifies spatial motifs associated with cancer recurrence and patient survival following treatment.
nature.com/articles/s41551-022-00951-w
Another important study shows how enhancing Graph Neural Network models with histopathological features from the tumor microenvironment extracted from whole-slide images can better predict cancer prognosis in kidney, breast, lung and uterine cancers.
nature.com/articles/s41551-022-00923-0
Treating histological tissues as graphs, with image patches as nodes, ensures that different regions of the same whole-slide image are inter-connected & that dependencies can be explicitly modeled (e.g. with GNNs).
This opens up many opportunities vs. treating patches independently (see the sketch below).
However, such a strategy is also computationally expensive.
The paper addresses this explicitly by implementing a patch-aggregation strategy with edge-level attention.
A summary of this paper is also presented in this News & Views piece:
nature.com/articles/s41551-022-00924-z
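To illustrate the patches-as-nodes idea above, here is a hypothetical sketch in PyTorch Geometric. GATConv stands in for the edge-level attention; the actual architectures and feature extractors in these papers differ.

```python
# Hypothetical sketch of a whole-slide image as a graph of patches.
# GATConv computes attention per edge, so informative neighboring
# regions contribute more to each patch's update.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool

class SlideGraphModel(nn.Module):
    def __init__(self, patch_dim=512, hid_dim=128):
        super().__init__()
        self.gat = GATConv(patch_dim, hid_dim, heads=4, concat=False)
        self.head = nn.Linear(hid_dim, 1)        # e.g. a prognosis score

    def forward(self, x, edge_index, batch):
        # x: (num_patches, patch_dim) patch embeddings from a CNN
        # edge_index: (2, num_edges) connecting spatially adjacent patches
        # batch: (num_patches,) mapping each patch to its slide
        h = torch.relu(self.gat(x, edge_index))  # edge-attention aggregation
        slide = global_mean_pool(h, batch)       # pool patches per slide
        return self.head(slide)
```

Aggregating patches through an attentive graph layer is what lets dependencies between distant regions of the same slide be modeled explicitly, at the cost of the extra computation mentioned above.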
2️⃣ Transformer models (e.g. #chatGPT)
Transformers learn the meaning of words and the structure of language via contextual clues.
One of the main advantages of such models is that they are self-supervised & don't need explicit annotations in the training data.
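For intuition, here is the core mechanism behind those contextual clues: toy scaled dot-product self-attention in plain PyTorch, purely illustrative. Each token's representation becomes a weighted mix of every other token's.

```python
# Toy self-attention: every token attends to all others, so the output
# representation of each token depends on its context.
import torch

def self_attention(x, wq, wk, wv):
    # x: (seq_len, d_model); wq/wk/wv: (d_model, d_k) projections
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / k.size(-1) ** 0.5      # pairwise token affinities
    weights = torch.softmax(scores, dim=-1)   # how much context each token uses
    return weights @ v                        # context-aware representations
```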
Extrapolating, @pranavrajpurkar & @AndrewYNg use Transformers to identify pathologies in unlabelled chest X-ray images.
As contextual clues, a vision Transformer uses pathology features that a text Transformer learns from the raw radiology report associated with each X-ray image.
The raw radiology reports thus act as a natural source of supervision.
The performance of the self-supervised model is comparable to that of radiologists.
That's quite impressive.
nature.com/articles/s41551-022-00936-9
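Here is a minimal sketch of how report text can supervise an image model, assuming a CLIP-style contrastive objective; the pairing logic is illustrative, not the paper's exact implementation.

```python
# Assumed CLIP-style objective: embeddings of matching X-ray/report
# pairs are pulled together, mismatched pairs pushed apart. No manual
# labels are needed; the report itself is the supervision signal.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # img_emb, txt_emb: (batch, dim) from a vision and a text Transformer;
    # row i of each comes from the same patient
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature  # cosine similarities
    targets = torch.arange(img_emb.size(0))     # diagonal pairs match
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```

Once trained this way, candidate pathologies can be embedded as text and matched against image embeddings, which is one way such a model can classify without any labels.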
On a different note, this Review paper highlights something very important: how openly releasing pre-trained models changes the paradigm from model building to model deployment.
I believe this is an important conceptual shift for Computational Medicine.
nature.com/articles/s41551-022-00898-y
Indeed, such large self-supervised #DeepLearning models need lots of training data.
Getting so much data is usually prohibitive.
What's more, training on so much data is often not computationally feasible.
Access to pre-trained models changes this paradigm by increasing accessibility.
To wrap up: self-supervised #DeepLearning models are already very prolific in their applications to Computational Medicine.
Contextual learning makes them much better.
They are not perfect, but they need not be.
There’s immense opportunity for improving & extending such models.
In my view, one very exciting and still under-explored area is the application of such models to DNA sequencing data.
In particular, decoding broken DNA, such as that from cancer patients, has the potential to be very insightful.
2023 will certainly be big on this!