Typefully

Write better tweets.
Grow your audience.

Now with AI and LinkedIn cross-posting.

Join 100,000+ creators

@david_perell
@ev
@dhh
@marckohlbrugge
@maccaw
@warikoo
@adamwathan
Build in public

Share a recent learning with your followers.

Create engagement

Pose a thought-provoking question.

Curated writing prompts to never run out of ideas

Get fresh prompts and ideas whenever you write - with examples of popular tweets.

The best writing experience, with powerful scheduling

Create your content without distractions - where it's easier to write and organize threads.

Cross-post to LinkedIn

NEW

Automatically add LinkedIn versions to your posts.

Discover what works with powerful analytics

Easily track your engagement analytics to improve your content and grow faster.

@frankdilo: Feedback?
@albigiu: Love it 🔥

Collaborate on drafts and leave comments

Write with your teammates and get feedback with comments.

🧡

Rewrite as thread start

πŸ”₯

Make it Punchier

βœ…

Fix Grammar

Improve your content with AI suggestions and rewrites

Get suggestions, tweet ideas, and rewrites powered by AI.

And much more:

Auto-Split Text in Tweets

Thread Finisher

Tweet Numbering

Pin Drafts

Connect Multiple Accounts

Automatic Backups

Dark Mode

Keyboard Shortcuts

Top tweeters love Typefully

100,000+ creators and teams chose Typefully to curate their Twitter presence. Join them.

Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools. Then I found @typefully, and I've been using it for seven months straight. When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.

Luke Beard 🇺🇦 @LukesBeard
Good lord, @typefully is very good.

Anna David @annabdavid
I forgot about Twitter for 10 years. Now I'm remembering why I liked it in the first place. Huge part of my new love for it: @typefully. It makes writing threads easy and oddly fun.

DHH @dhh
This is my new go-to writing environment for Twitter threads. They've built something wonderfully simple and distraction free with Typefully.

ian hollander @ianhollander
Such a huge fan of what @typefully has brought to the writing + publishing experience for Twitter. Easy, elegant and almost effortless threads - and a beautiful UX that feels intuitive for the task - and inviting to use.

Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully. Kudos to @frankdilo and @linuz90 for building such a delightful experience. Killer feature to me is the native image editor - unique and super useful 🙏

Queue your content in seconds

Write, schedule and boost your tweets - with no need for extra apps.

Schedule with one click

Queue your tweet with a single click - or pick a time manually.

Pick the perfect time

Time each tweet to perfection with Typefully's performance analytics.

Boost your content

Retweet and plug your tweets for automated engagement.

Queue

Start creating a content queue.

Tweet with daily inspiration

Break through writer's block with great ideas and suggestions.

Start with a fresh idea

Enjoy daily prompts and ideas to inspire your writing.

Check examples out

Get inspiration from tweets that used these prompts and ideas.

Flick through topics

Or skim through curated collections of trending tweets for each topic.

Check the analytics that matter

Build your audience with insights that make sense.

Write, edit, and track tweets together

Write and publish with your teammates and friends.

@frankdilo
@albigiu

Share your drafts

Brainstorm and bounce ideas with your teammates.

@frankdilo: Feedback?
@albigiu: Love it 🔥

Add comments

NEW

Get feedback from coworkers before you hit publish.

Read, Write, Publish

Read, Write
Read

Control user access

Decide who can view, edit, or publish your drafts.

Build an automated tweet machine

Our Zapier integration enables countless no-code workflows.

Typefully → Slack
Share new drafts in Slack channel

RSS → Typefully
New draft from RSS feed item content

Docs → Typefully
New scheduled draft from Google Doc

Typefully → Sheets
New spreadsheet row from published tweet

Schedule → Typefully
Create new template draft every Monday

Typefully → Gmail
Send an email for every published thread

Feedly → Typefully
Create draft for new items in feeds folder

Twitter → Typefully
Thank new followers with a tweet

Ready to write better tweets and grow your audience?

Get started with our generous free plan.

Typefully

Β© 2022 Mailbrew Inc.

Privacy

Terms

Contact us

Work with us

Product

Pricing

Changelog

Keyboard shortcuts

Invite teammates

Affiliate program

Grow on Twitter

Typefully Academy

Get a social blog

Automate with Zapier

Boost engagement

Popular profiles

Twitter Card Validator

Help & Social

Help pages

Brand assets

Twitter

Blog

Announcements

@DanHollick

How ChatGPT works: a deep dive

How does a Large Language Model like ChatGPT actually work? Well, they are both amazingly simple and exceedingly complex at the same time. Hold on to your butts, this is a deep dive ↓
You can think of a model as calculating the probabilities of an output based on some input. In language models, this means that given a sequence of words, they calculate the probabilities for the next word in the sequence. Like a fancy autocomplete.
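
To make the "fancy autocomplete" idea concrete, here is a minimal Python sketch. The candidate words and their probabilities are invented for illustration; a real model scores its entire vocabulary.

    import random

    # Made-up next-word probabilities for a given context.
    context = "the cat sat on the"
    next_word_probs = {"mat": 0.62, "sofa": 0.21, "roof": 0.09, "banana": 0.08}

    # Greedy pick: take the most probable word.
    most_likely = max(next_word_probs, key=next_word_probs.get)

    # Sampling: draw a word in proportion to its probability.
    sampled = random.choices(list(next_word_probs),
                             weights=list(next_word_probs.values()))[0]

    print(context, most_likely)   # -> "the cat sat on the mat"
    print(context, sampled)       # sampling adds some variety
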
To understand where these probabilities come from, we need to talk about something called a neural network. This is a network-like structure where numbers are fed into one side and probabilities are spat out the other. They are simpler than you might think.
Imagine we wanted to train a computer to solve the simple problem of recognising symbols on a 3x3 pixel display. We would need a neural net like this: an input layer, two hidden layers, and an output layer.
Our input layer consists of 9 nodes called neurons - one for each pixel. Each neuron would hold a number from 1 (white) to -1 (black). Our output layer consists of 4 neurons, one for each of the possible symbols. Their value will eventually be a probability between 0 and 1.
In between these, we have rows of neurons, called "hidden" layers. For our simple use case we only need two. Each neuron is connected to the neurons in the adjacent layers by a weight, which can have a value between -1 and 1.
When a value is passed from the input neuron to the next layer, it's multiplied by the weight. That neuron then simply adds up all the values it receives, squashes the value between -1 and 1, and passes it to each neuron in the next layer.
The neurons in the final hidden layer do the same but squash the value between 0 and 1 and pass that to the output layer. Each neuron in the output layer then holds a probability, and the highest number is the most probable result.
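
Here is a minimal Python sketch of that forward pass for the 3x3-pixel example. The thread doesn't name the squashing functions, so tanh (for -1 to 1) and a sigmoid (for 0 to 1) are assumed, and the weights are random rather than trained.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, weights, squash):
        # each neuron sums its weighted inputs, then squashes the result
        return squash(weights @ x)

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))   # squashes to (0, 1)

    pixels = np.array([1, -1, 1, -1, 1, -1, 1, -1, 1], dtype=float)  # flattened 3x3 image

    w1 = rng.uniform(-1, 1, size=(6, 9))   # input layer    -> hidden layer 1
    w2 = rng.uniform(-1, 1, size=(6, 6))   # hidden layer 1 -> hidden layer 2
    w3 = rng.uniform(-1, 1, size=(4, 6))   # hidden layer 2 -> 4 output symbols

    h1 = layer(pixels, w1, np.tanh)        # hidden layer squashes to (-1, 1)
    h2 = layer(h1, w2, sigmoid)            # final hidden layer squashes to (0, 1)
    out = layer(h2, w3, sigmoid)           # one value per symbol, read as a probability

    print(out.round(2), "-> most probable symbol:", int(out.argmax()))
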
When we train this network, we feed it an image we know the answer to and calculate the difference between the answer and the probability the net calculated. We then adjust the weights to get closer to the expected result. But how do we know *how* to adjust the weights?
I won't go into detail, but we use clever mathematical techniques called gradient descent and backpropagation to figure out what value for each weight will give us the lowest error. We keep repeating this process until we are satisfied with the model's accuracy.
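
As a toy illustration of the idea (leaving out backpropagation across many weights), here is gradient descent on a single made-up weight: compute the error, take its derivative with respect to the weight, and nudge the weight downhill.

    def error(w, x=0.5, target=1.0):
        prediction = w * x
        return (prediction - target) ** 2

    def gradient(w, x=0.5, target=1.0):
        # derivative of the squared error with respect to the weight
        return 2 * (w * x - target) * x

    w, learning_rate = 0.0, 0.5
    for step in range(20):
        w -= learning_rate * gradient(w)   # step "downhill" on the error surface

    print(round(w, 4), round(error(w), 6))  # w approaches 2.0, where the error is ~0
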
This is called a feed-forward neural net - but this simple structure won't be enough to tackle the problem of natural language processing. Instead, LLMs tend to use a structure called a transformer, and it has some key concepts that unlock a lot of potential.
First, let's talk about words. Instead of each word being an input, we can break words down into tokens, which can be words, subwords, characters or symbols. Notice how they even include the space.
Much like our model represents each pixel value as a number between -1 and 1, these tokens also need to be represented as a number. We could just give each token a unique number and call it a day, but there's another way we can represent them that adds more context.
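
A hand-written sketch of tokenisation: the split below is invented for illustration (real tokenisers learn their vocabulary, for example with byte-pair encoding), but it shows tokens carrying their leading space and each getting a number.

    text = "Typefully schedules tweets"
    tokens = ["Type", "fully", " schedules", " tweet", "s"]   # note the leading spaces

    # The simplest representation: give each distinct token a unique number.
    token_ids = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
    print([(tok, token_ids[tok]) for tok in tokens])

    assert "".join(tokens) == text   # the tokens reassemble the original text
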
We can store each token in a multi-dimensional vector that indicates its relationship to other tokens. For simplicity, imagine a 2D plane on which we plot the location of words. We want words with similar meanings to be grouped closer together. This is called an embedding.
Embeddings help create relationships between similar words, but they also capture analogies. For example, the distance between the words dog and puppy should be the same as the distance between cat and kitten. We can also create embeddings for whole sentences.
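
A toy 2D example of that analogy property, with invented coordinates; real embeddings have hundreds of learned dimensions.

    import numpy as np

    embedding = {
        "dog":    np.array([2.0, 3.0]),
        "puppy":  np.array([2.2, 1.8]),
        "cat":    np.array([5.0, 3.1]),
        "kitten": np.array([5.2, 1.9]),
    }

    dog_to_puppy = embedding["puppy"] - embedding["dog"]
    cat_to_kitten = embedding["kitten"] - embedding["cat"]
    print(dog_to_puppy, cat_to_kitten)   # roughly the same offset

    # "dog is to puppy as cat is to ?" becomes vector arithmetic:
    guess = embedding["cat"] + dog_to_puppy
    closest = min(embedding, key=lambda w: np.linalg.norm(embedding[w] - guess))
    print(closest)   # -> kitten
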
The first part of the transformer is to encode our input words into these embeddings. Those embeddings are then fed to the next process, called attention, which adds even more context to the embeddings. Attention is massively important in natural language processing.
Embeddings struggle to capture words with multiple meanings. Consider the two meanings of 'bank'. Humans derive the correct meaning based on the context of the sentence. 'Money' and 'River' are contextually important to the word bank in sentences like "I withdrew money from the bank" and "we sat on the river bank".
The process of attention looks back through the sentence for words that provide context to the word bank. It then re-weights the embedding so that the word bank is semantically closer to the word 'river' or 'money'.
This attention process happens many times over to capture the context of the sentence in multiple dimensions. After all this, the contextualised embeddings are eventually passed into a neural net, like our simple one from earlier, that produces probabilities.
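
A heavily simplified sketch of that re-weighting for the "bank" example: the word vectors are made up, and plain dot products stand in for the learned query/key/value projections a real transformer uses.

    import numpy as np

    words = ["the", "boat", "drifted", "to", "the", "river", "bank"]
    vectors = {
        "the": [0.1, 0.0, 0.0], "boat": [0.2, 0.9, 0.1], "drifted": [0.1, 0.7, 0.2],
        "to": [0.0, 0.1, 0.0], "river": [0.1, 1.0, 0.3], "bank": [0.6, 0.4, 0.9],
    }
    E = np.array([vectors[w] for w in words], dtype=float)

    query = np.array(vectors["bank"], dtype=float)
    scores = E @ query                               # how relevant each word is to "bank"
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> attention weights
    contextual_bank = weights @ E                    # re-weighted, context-aware embedding

    for word, weight in zip(words, weights):
        print(f"{word:8s} {weight:.2f}")             # "river" gets a relatively high weight
    print("contextualised 'bank':", contextual_bank.round(2))
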
That is a massively simplified version of how an LLM like ChatGPT actually works. There is so much I've left out or skimmed over for the sake of brevity (this is the 20th tweet). If I left something important out or got some detail wrong, let me know.
I've been researching this thread on/off for ~6 months. Here are some of the best resources: Obviously this absolute unit of a piece by Stephen Wolfram which goes into much more detail than I have here: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
For something a bit more approachable, this @3blue1brown video series about how neural nets learn. There are some lovely graphics here, even if you don't follow all the maths. https://youtu.be/aircAruvnKk?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
And @CohereAI's blog has some excellent pieces and videos explaining embeddings, attention and transformers. https://txt.cohere.ai/what-is-attention-in-language-models/ https://txt.cohere.ai/sentence-word-embeddings/
You may now let go of your butts.
Dan Hollick 🇿🇦

@DanHollick

product designer - tweets about design systems and tools. Framer course: advancedframer.com prev: @tidal, @barclays

Follow on Twitter
My website