Craft and publish engaging content in an app built for creators.
Publish anywhere
Post on LinkedIn, Threads, & Mastodon at the same time, in one click.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, shareable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
@aaditsh
I think this thread hook could be improved.
@frankdilo
On it 🔥
Share drafts & leave comments
Write with your teammates and get feedback with comments.
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
170,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge @marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp @jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer but nothing beats it. If you’re still tweeting from Twitter you’re wasting time.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory @visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Read, Write, Publish
Control user access
Decide who can view, edit, or publish your drafts.
YR(τ,τ):
Further perspective on some potential competitors in the decentralized AI space, in particular Proof-of-Learning (arxiv.org/abs/2103.05633) versus Proof-of-Intelligence of Bittensor.
1. Proof-of-Learning constructs evidence that stochastic gradient descent was used to obtain model parameter updates. This evidence can then be verified in a fraction of the time it would take to fully reproduce the specific parameter updates.
2. Proof-of-Learning is thus entirely focused on verifying the training process step-by-step, not on the quality of the final model output itself; Bittensor instead measures that quality via an information-theoretic and game-theoretic assessment of relative usefulness toward a functional model objective.
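The step-by-step verification in points 1 and 2 can be sketched concretely: the prover logs every SGD step, and a verifier recomputes only a random sample of those steps instead of replaying the whole run. This is a toy scalar-model sketch under illustrative assumptions (names, learning rate, and data are invented here), not the actual Proof-of-Learning protocol:

```python
import random

def grad(w, batch):
    # Gradient of mean squared error for a scalar linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train_with_log(w0, batches, lr):
    # Prover: run SGD and log every (w_before, batch, w_after) step.
    log, w = [], w0
    for batch in batches:
        w_next = w - lr * grad(w, batch)
        log.append((w, batch, w_next))
        w = w_next
    return w, log

def spot_check(log, lr, k, tol=1e-9):
    # Verifier: recompute only k randomly sampled steps and confirm each
    # recorded update is reproducible, rather than replaying all of training.
    for w, batch, w_next in random.sample(log, k):
        if abs((w - lr * grad(w, batch)) - w_next) > tol:
            return False
    return True

random.seed(0)
data = [[(x, 3.0 * x) for x in (1.0, 2.0)] for _ in range(100)]
w_final, log = train_with_log(0.0, data, lr=0.05)
print(spot_check(log, lr=0.05, k=10))  # → True
```

Sampling k of n logged steps catches a single tampered step only with probability k/n, which is the cost/assurance trade-off the thread alludes to: verification is cheap precisely because it is probabilistic.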
3. Proof-of-Learning also presupposes that users have obtained a full training lifetime of verified model updates, applied to produce a final, fully trained model that can then be employed.
It is then trusted that the previously verified model will continue to function nominally, yet final performance is never actually verified.
4. AI blockchains based on Proof-of-Learning face a significant bandwidth-insufficiency problem for large-scale distributed model training from scratch.
Instead, Bittensor principally operates as an intelligent routing network for Mixture-of-Experts where participant foundational models are already pretrained.
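The routing role described in point 4 can be illustrated with a minimal gating sketch: score already-pretrained expert models for an incoming query, keep the top-k, and mix their outputs by softmax weight. Everything here (the keyword scorer, the expert records) is an invented illustration, not Bittensor's actual routing protocol:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(query, experts, score_fn, k=2):
    # Score every expert for this query, keep the top-k, and weight their
    # contributions by the softmax of their relevance scores.
    scores = [score_fn(query, e) for e in experts]
    top = sorted(range(len(experts)), key=lambda i: -scores[i])[:k]
    weights = softmax([scores[i] for i in top])
    return [(experts[i]["name"], w) for i, w in zip(top, weights)]

# Toy "experts": each advertises keyword specialties (an assumption made
# purely for this sketch; real routing would use learned representations).
experts = [
    {"name": "code-expert", "topics": {"python", "rust"}},
    {"name": "bio-expert",  "topics": {"protein", "genome"}},
    {"name": "math-expert", "topics": {"algebra", "python"}},
]

def keyword_score(query, expert):
    return sum(1.0 for t in expert["topics"] if t in query)

print(route("debug this python generator", experts, keyword_score))
```

The point of the sketch is the division of labor: no single participant needs the bandwidth to train a monolithic model, because the network's job is to dispatch queries to models that were already pretrained elsewhere.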
5. The Muskian limits-of-physics view on ML is that model capacity is limited; the answer is diverse specialization of many foundational models over vast self-supervised data, fitting limited expertise into each model.
Bittensor's Proof-of-Intelligence measures the depth of specialization and also promotes synergistic cooperation between models to smoothly cover multi-expert capabilities.
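One way to picture point 5's "consensus of value" is a stake-weighted combination of validator scores: each validator rates each model's usefulness, and rewards follow the stake-weighted consensus rather than any single opinion. This is a deliberately simplified sketch with invented names and numbers, not the actual Yuma consensus mechanism:

```python
def consensus_scores(validator_scores, stakes):
    # validator_scores: {validator: {model: score}}; stakes: {validator: stake}.
    # Combine per-validator scores into one stake-weighted score per model.
    total_stake = sum(stakes.values())
    combined = {}
    for v, scores in validator_scores.items():
        w = stakes[v] / total_stake
        for model, s in scores.items():
            combined[model] = combined.get(model, 0.0) + w * s
    return combined

def allocate_rewards(combined, pool):
    # Split a fixed reward pool proportionally to consensus score.
    total = sum(combined.values())
    return {m: pool * s / total for m, s in combined.items()}

scores = {
    "val_a": {"model_1": 0.9, "model_2": 0.4},
    "val_b": {"model_1": 0.8, "model_2": 0.6},
}
stakes = {"val_a": 3.0, "val_b": 1.0}
combined = consensus_scores(scores, stakes)
print(allocate_rewards(combined, pool=100.0))
```

Because rewards depend on scores that other participants assign, deep specialization and cooperation with complementary models both raise a model's consensus value, which is the incentive structure the tweet describes.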
6. Post-RETRO transformers with large-scale retrieval capabilities are currently incentivized in Bittensor, since retrieval can significantly improve model scores, often via direct lookup.
More sophisticated adversarial resilience is based on distillation proxies to combat even retrieval, but consensus at this threat level is expensive.
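The distillation-proxy idea in point 6 can be sketched as follows: a validator fits a small proxy to a candidate's input-to-output behaviour on sampled queries, then checks whether the proxy generalizes to held-out queries. A model that learned real structure distills well; a pure lookup table does not. This toy scalar example is entirely illustrative (the "models", proxy, and threshold are all assumptions of this sketch):

```python
import random

def fit_linear_proxy(model_fn, xs):
    # Distill the candidate into a least-squares line y = a * x + b.
    ys = [model_fn(x) for x in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def proxy_fidelity(model_fn, proxy, held_out):
    # Mean absolute gap between candidate and proxy on fresh queries:
    # small gap => the behaviour distilled into the proxy; large gap =>
    # the candidate's answers look like per-input memorized lookups.
    a, b = proxy
    return sum(abs(model_fn(x) - (a * x + b)) for x in held_out) / len(held_out)

random.seed(1)
genuine = lambda x: 2.0 * x + 1.0           # learned, structured behaviour
lookup = lambda x: hash(round(x, 6)) % 100  # memorized per-input answers
train = [random.uniform(0, 10) for _ in range(50)]
test = [random.uniform(0, 10) for _ in range(20)]
for name, m in [("genuine", genuine), ("lookup", lookup)]:
    gap = proxy_fidelity(m, fit_linear_proxy(m, train), test)
    print(name, "distillation gap:", round(gap, 3))
```

The expense the tweet mentions follows directly: every validator must fit and evaluate a proxy per candidate, so consensus at this threat level costs far more compute than simple output scoring.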
7. The Bittensor protocol leaves the exact means of adversarial resilience open to change, as it will depend on the underlying model architecture.
Validators will have to employ more sophisticated defences over time as adversarial behaviours evolve, but the underlying Proof-of-Intelligence remains in place as the core consensus of value.
8. Potential competitors based on Proof-of-Learning will be inherently limited in the model sizes supported, and their incentives will focus on training and not final performance.
In contrast, Bittensor incentivizes large-scale fine-tuned foundation models of diverse expertise leveraging past compute/training efforts by e.g. the vibrant and growing HuggingFace community.
9. Bittensor capability expands at roughly the same rate as new generative LLMs are open-sourced, without the unnecessary concern of proving their iterative training process, when the real concern is just proving final utility.
#YR #AI #ML