Built for 𝕏 ∙ LinkedIn ∙ Bluesky ∙ Threads. Powered by AI.
Write & schedule, effortlessly
Craft and publish engaging content in an app built for creators.
NEW
Publish anywhere
Publish on X, LinkedIn, Bluesky, Threads, & Mastodon at the same time.
Make it punchier 👊
Typefully
@typefully
We're launching a Command Bar today with great commands and features.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, shareable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
@aaditsh
I think this thread hook could be improved.
@frankdilo
On it 🔥
Share drafts & leave comments
Write with your teammates and get feedback with comments.
NEW
Easlo
@heyeaslo
Reply with "Notion" to get early access to my new template.
Jaga
@kandros5591
Notion 🙏
DM Sent
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
180,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge@marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp@jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer but nothing beats it. If you’re still tweeting from Twitter, you’re wasting time.
DHH@dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago@svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜@lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory@visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Read, Write, Publish
Control user access
Decide who can view, edit, or publish your drafts.
Wrong twice this week!
I've been suggesting self-consistency as a way to trade extra compute for accuracy. Turns out it doesn't work, and there are better ways.
Way too many useful things in this frankly underrated paper I wish I'd read sooner 👇
The first is about self-consistency, which is just sampling the model several times at high temperature and looking for consensus among the resulting answers.
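For concreteness, here's a minimal sketch of self-consistency. `generate` is a hypothetical helper standing in for whatever sampling call you use; it's assumed to return the extracted final answer from one completion:

```python
from collections import Counter

def self_consistency(prompt, generate, n_samples=20, temperature=0.8):
    # Sample several independent completions at high temperature and
    # majority-vote over the extracted final answers.
    answers = [generate(prompt, temperature=temperature) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```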
The paper's argument against it takes an extra step, but here it is:
1. Decoding paths that contain a Chain of Thought are more likely to end in a correct answer.
2. But the answers from those CoT paths often aren't the predominant answer across all potential paths - so a majority vote tends to settle on a non-CoT answer, which is more likely to be wrong.
Live and learn I guess - but what excites me about this is the next thing.
I'd always imagined LLMs as needing CoT prompts to elicit step-by-step answers, and that you could fine-tune them into responding with CoT tokens.
Turns out that's not always true. Even at the first token, CoT paths usually exist, especially at larger model sizes. However,
they usually don't surface if you greedily pick the most probable token at every step. My long-standing guess is that users prefer LLMs (and humans) that answer immediately without prevaricating.
I've caught myself doing the same: answering quickly instead of thinking a problem through.
The improvements (as much as benchmarks are hard to believe these days) make a lot of sense.
What's also amazing is that on CoT paths, models are measurably more confident about their answers at the token-probability level. How does the method work?
Simple - at the very first token, it branches into n parallel paths in decreasing order of probability, decodes each one greedily, extracts the answer from each, then picks the answer with the highest total confidence value.
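Here's a rough sketch of that decoding loop as I understand it, using Hugging Face transformers with GPT-2 as a stand-in model. Scoring each path by the average top-1 vs top-2 probability margin is my reading of the paper's confidence measure; averaging it over all generated tokens (rather than just the answer span) and keeping the single best path (rather than aggregating confidence across paths that share an answer) are simplifications:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def cot_decode(prompt, k=10, max_new_tokens=100):
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        next_logits = model(**inputs).logits[0, -1]
    # Branch only at the very first decoding step: take the top-k candidate tokens.
    top_k_ids = torch.topk(torch.softmax(next_logits, dim=-1), k).indices

    best_answer, best_conf = None, -1.0
    for first_id in top_k_ids:
        ids = torch.cat([inputs.input_ids[0], first_id.view(1)]).unsqueeze(0)
        out = model.generate(ids, do_sample=False, max_new_tokens=max_new_tokens,
                             output_scores=True, return_dict_in_generate=True)
        # Confidence: average gap between the top-1 and top-2 token probabilities
        # along the decoded continuation (simplified from the paper's answer-span margin).
        margins = []
        for step_logits in out.scores:
            top2 = torch.topk(torch.softmax(step_logits[0], dim=-1), 2).values
            margins.append((top2[0] - top2[1]).item())
        conf = sum(margins) / len(margins)
        text = tok.decode(out.sequences[0][inputs.input_ids.shape[1]:],
                          skip_special_tokens=True)
        if conf > best_conf:
            best_answer, best_conf = text, conf
    return best_answer, best_conf
```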
This is also a good way to test intrinsic reasoning capabilities (since the prompting is just Q: A:) across models and tasks.
The performance improvement also sticks around, even when you add chain-of-thought prompting 🤯
There are a few things here that are useful for sampling-based approaches. The first is that branching (and diversity) early in token selection is significantly better than branching later on. This is different from Entropix, which (in my limited understanding) doesn't vary temperature much with sequence length.
The intrinsic CoT paths also reveal things about models in their base state.
For one, models tend to do math left to right instead of following the correct order of operations.
Models also find CoT paths harder to hit as the number of steps grows or tasks become more complex. State tracking becomes especially hard.
The authors also suggest that CoT prompting in a lot of cases just causes the model to 'mimic' the reasoning style suggested in the prompt - which can be good or bad depending on what you're trying to do.