Craft and publish engaging content in an app built for creators.
Publish anywhere
Post on LinkedIn, Threads, & Mastodon at the same time, in one click.
Make it punchier 👊
Typefully
@typefully
We're launching a Command Bar today with great commands and features.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, shareable pages. Make it go viral on other platforms too.
+14 Followers
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
@aaditsh
I think this thread hook could be improved.
@frankdilo
On it 🔥
Share drafts & leave comments
Write with your teammates and get feedback with comments.
Easlo
@heyeaslo
Reply with "Notion" to get early access to my new template.
Jaga
@kandros5591
Notion 🙏
DM Sent
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
180,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge @marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp @jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer but nothing beats it. If you’re still tweeting from Twitter you’re wasting time.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory @visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so you can easily expand your reach.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Read, Write, Publish
Control user access
Decide who can view, edit, or publish your drafts.
This plot from @OpenAI's Scaling Laws for Neural Language Models is widely referred to when discussing training dynamics, but did you know it has almost no bearing on what massive LM training curves look like? (1/12)
You can read this thread unrolled at: typefully.com/BlancheMinerva/mPialqw
Here is the same plot (as the one on the left) from GPT-NeoX 20B, for both training and validation loss. Why don't we see the same effect of a "burn-in" period? 🤔🤔🤔 (2/12)
I asked this same question in the #EleutherAI discord an hour ago and was blown away by what an eagle-eyed observer called thrasher pointed out: my plot starts too far to the right for the burn-in to show up! (3/12)
When looking at OpenAI's plot, it's easy to assume that it represents the training curve of large models. But 10^9 is "only" 1B: my model is 20B params and was trained for 400B tokens. The burn-in period ends at around the 20M token mark (4/12)
That's ~0.05% of the way through training! I logged my first loss value at 300 steps, thinking that was surely early enough to reveal anything interesting. But 300 steps of a model with a batch size of 3.1M is nearly 1B tokens: my logging can only capture the tail-end (5/12)
of the plot. The OpenAI plot shows two phase transitions, one just before 10^8 and one just before 10^9. The first phase transition occurs in the first 0.05% and the second in the first 0.25% of the training of my 20B parameter model. For Google's recent PaLM, (6/12)
those numbers drop to 0.025% and 0.125% respectively. More than 99.5% of training occurs in the "flatlined" regime after the second phase transition. Below are various evaluation benchmarks. Note that the *entire curve* is to the right of the second phase transition (7/12)
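As a rough sanity check of the arithmetic in tweets 5-7, here is a small Python sketch that redoes the numbers. The transition points (~2×10^8 and ~10^9 tokens) are my reading of the OpenAI plot as described above, and PaLM's ~780B training tokens come from the PaLM paper; treat every figure as approximate.

```python
# Quick sanity check of the arithmetic above (all values approximate).
# Assumptions: the two phase transitions sit at roughly 2e8 and 1e9 tokens,
# GPT-NeoX-20B was trained for 400B tokens, and PaLM for ~780B tokens.

neox_tokens = 400e9                            # GPT-NeoX-20B total training tokens
palm_tokens = 780e9                            # PaLM total training tokens (assumption)
transitions = {"first": 2e8, "second": 1e9}    # approximate transition points, in tokens

# Tokens already seen at the first logged loss value (step 300, ~3.1M tokens/step):
batch_tokens = 3.1e6
first_log_step = 300
print(f"first logged point: {first_log_step * batch_tokens:.2e} tokens")  # ~9.3e+08, i.e. nearly 1B

# Fraction of training that happens before each phase transition:
for name, tok in transitions.items():
    print(f"{name} transition: NeoX {tok / neox_tokens:.3%}, PaLM {tok / palm_tokens:.3%}")
# -> NeoX ~0.050% / ~0.250%; PaLM ~0.026% / ~0.128%,
#    in line with the ~0.025% and ~0.125% quoted above.
```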
So clearly there's a lot of interesting stuff happening that OpenAI's plot doesn't tell us about.
Warning: these evals don't go all the way to the end of training. The final eval numbers for the model are:
Lambada: 0.720
HellaSwag: 0.535
PiQA: 0.779
Winogrande: 0.661
(8/12)
(I don't have MathQA and PubMedQA yet).
In my 3-month training run we blew past everything the OpenAI plot shows within two days! That early in training, LLMs are still producing complete garbage; they have no idea of spelling, let alone grammar or anything of substance.
(9/12)
performance improves substantially. Additionally, they claim that this is where “induction heads” begin to appear. I haven’t looked into this personally yet, but it seems like a very interesting set of ideas. And it does make sense for 0-shot to develop after few-shot IMO (11/12)
As a reminder, intermediate checkpoints for GPT-NeoX 20B are available; DM me if you would like to experiment with them.
Also DM me if you want to experiment on GPT-NeoX but lack the compute! EleutherAI is happy to provide free compute to anyone doing research on it.
(12/12)