Craft and publish engaging content in an app built for creators.
Publish anywhere
Post on LinkedIn, Threads, & Mastodon at the same time, in one click.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, sharable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
Share drafts & leave comments
Write with your teammates and get feedback with comments.
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
180,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge @marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp @jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer but nothing beats it. If you’re still tweeting from Twitter you’re wasting time.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory @visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Read, Write, Publish
Control user access
Decide who can view, edit, or publish your drafts.
(1/25) @ethereum roadmap: Rollups, Danksharding and Settlement Outside the EVM
Rollups are predicated on the idea that the complete, final record exists on-chain. Danksharding is about pushing this data outside the EVM (off-chain).
What does this mean for the Ethereum endgame?
(3/25) @ethereum is the World Computer, a globally shared computing platform that exists in the space between a network of 1000s of computers (nodes).
The Ethereum Virtual Machine (EVM) provides the virtual computer, the blockchain recording its history.
inevitableeth.com/home/ethereum/world-computer
(4/25) Each node runs a local version of the EVM, which is then held perfectly in sync with every other copy of the EVM through a process called Proof of Stake (PoS).
Any individual EVM is a window into the shared state of the World Computer.
inevitableeth.com/home/ethereum/network/consensus/pos
(5/25) At the end of the day, real computers need to run the @ethereum software. And so, the World Computer is limited by the minimum requirements it sets for nodes.
Here lies a fundamental trade-off: higher minimum requirements = less decentralization.
(6/25) Enter @ethereum's rollup-centric roadmap:
- computation will migrate from the EVM to rollups
- rollups will settle to Ethereum (post the final copy of ownership to mainnet)
- Danksharding will increase Ethereum's capacity for this data
inevitableeth.com/en/home/ethereum/upgrades/scaling
(7/25) From this point, I'll assume you are familiar with rollups:
- Optimistic rollups automatically accept batches, but leave a challenge period for fraud proofs
- ZK-rollups submit validity proofs along with batches
If not, here's your resource.
inevitableeth.com/en/home/ethereum/upgrades/scaling/execution
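A toy sketch of that distinction, with made-up names and simplified logic (no real rollup works exactly like this):

```python
# Toy model of the two settlement styles: an optimistic rollup accepts a batch
# immediately and only finalizes it once a challenge window passes with no
# successful fraud proof; a zk-rollup checks a validity proof before accepting.

class OptimisticRollup:
    CHALLENGE_PERIOD_BLOCKS = 50_400  # illustrative (~7 days of 12s blocks)

    def __init__(self):
        self.pending = []  # (batch, block at which it becomes final)

    def submit(self, batch, current_block):
        # Accepted optimistically; a successful fraud proof can still revert it.
        self.pending.append((batch, current_block + self.CHALLENGE_PERIOD_BLOCKS))


class ZkRollup:
    def __init__(self, verify_validity_proof):
        self.verify = verify_validity_proof  # placeholder for a real verifier
        self.state_root = "genesis"

    def submit(self, batch, new_state_root, validity_proof):
        # Accepted only if the validity proof checks out right now.
        if self.verify(batch, new_state_root, validity_proof):
            self.state_root = new_state_root
            return True
        return False
```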
(9/25) In order to achieve @ethereum-based settlement, each rollup has a smart contract on mainnet.
The external rollup chain will execute thousands of transactions and then periodically send a compressed bundle to this smart contract.
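A minimal sketch of that settlement loop, with hypothetical names and generic compression standing in for real batch encoding:

```python
# Sketch: thousands of rollup transactions become one compressed bundle posted
# to the rollup's contract on mainnet. Names and encoding are illustrative.
import json
import zlib

def build_bundle(transactions):
    """Compress a batch of rollup transactions into a single payload."""
    return zlib.compress(json.dumps(transactions).encode())

class RollupSettlementContract:
    """Stand-in for the rollup's smart contract on Ethereum mainnet."""
    def __init__(self):
        self.bundles = []

    def submit_bundle(self, bundle):
        self.bundles.append(bundle)

contract = RollupSettlementContract()
batch = [{"from": "0xabc", "to": "0xdef", "value": 1}] * 1000
contract.submit_bundle(build_bundle(batch))
```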
(10/25) Today, there is one place to "store" data on @ethereum: you pass it into a smart contract the same way you would pass variables and other computational inputs.
Herein lies the solution to the data availability bottleneck: create dedicated space for rollups.
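A rough back-of-the-envelope for what "storing" data as calldata costs today, assuming EIP-2028 pricing (16 gas per non-zero byte, 4 per zero byte); the figures are illustrative only:

```python
# Every byte passed into a contract is paid for, which is why data
# availability is the bottleneck for rollups today.
def calldata_gas(payload: bytes) -> int:
    return sum(16 if b != 0 else 4 for b in payload)

bundle = bytes(100_000)          # a 100 kB batch of all-zero example data
print(calldata_gas(bundle))      # 400,000 gas spent purely on data
```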
(11/25) Long term, we have a solution: Danksharding.
Tl;dr @ethereum will gain "blobs" (large data chunks inaccessible by the EVM). Ethereum will guarantee the availability of all blob data for a long but limited amount of time.
inevitableeth.com/en/home/ethereum/upgrades/scaling/data
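For scale, the blob constants from EIP-4844 (proto-danksharding, the first step on this path); treat them as approximate:

```python
# Each blob is 4096 field elements of 32 bytes: ~128 KB of rollup data that
# the EVM never reads directly; it only ever sees a commitment to it.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32

print(FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT)  # 131072 bytes
```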
(12/25) Post-Danksharding, when a blob is posted to @ethereum it will be immediately archived by services like @etherscan or the Portal Network.
Then after ~1 month, blobs will expire. Only a cryptographic commitment will remain on the blockchain/within the EVM in perpetuity.
(13/25) At this point, there may be a nagging feeling in the back of your head; I've mentioned a few times now that blobs won't be accessible by the EVM...
The problem is that rollups, both optimistic and zk, need access to this data in order to function...
...right?
(14/25) Of course not! We've got the magic of KZG Commitments!
Tl;dr KZG commitments allow each blob to be compressed to a single value. Using elliptic curve cryptography, this value can then be used to prove that a specific piece of data existed in the blob.
inevitableeth.com/home/concepts/kzg-commitment
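A hedged sketch of the underlying math (the standard KZG construction; the notation is mine, not the Ethereum spec's). The blob's field elements are read as a polynomial $p$, and the commitment is $p$ evaluated at a secret point $\tau$ from a trusted setup:

$$C = [p(\tau)]_1, \qquad \pi_z = \left[\frac{p(\tau) - y}{\tau - z}\right]_1 \ \text{proves}\ p(z) = y,$$

verified with a single pairing check:

$$e\big(\pi_z,\ [\tau]_2 - [z]_2\big) = e\big(C - [y]_1,\ [1]_2\big).$$

So from one 48-byte commitment, any individual piece of blob data can later be proven - exactly the property rollups need.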
(15/25) Every single time a new block is created, a block builder will first gather all of the blobs, compute a KZG commitment for each, and add those commitments to the block.
Though the EVM won't have access to the blobs directly, it will have access to these commitments.
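A toy block-builder sketch of that flow; a SHA-256 hash stands in for the real KZG commitment, purely for illustration:

```python
# The builder gathers blobs, computes one short commitment per blob, and puts
# only those commitments in the block. The blobs travel alongside the block
# but never enter the EVM.
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    transactions: list
    blob_commitments: list  # the part the EVM (and history) can see

def commit(blob: bytes) -> bytes:
    return hashlib.sha256(blob).digest()  # stand-in for a KZG commitment

def build_block(transactions, blobs):
    return Block(transactions, [commit(b) for b in blobs])
```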
(16/25) This is the true magic of KZG commitments - of elliptic curve cryptography! Using just this lightweight commitment, the EVM effectively has access into the blobs...
...even after they expire!
All with cryptographic, trustless certainty.
(17/25) So let's talk specifically about how this would be implemented. Once again, if you don't have a good understanding of how rollups work, please refer to the link in tweet 7.
Let's imagine an optimistic and zk rollup in a post-Danksharding world.
(18/25) By definition, optimistic rollups don't need access to the blob data when it's posted; they only need it when fraud proofs are being submitted.
Posting an update to an optimistic rollup would remain largely unchanged, simply adding a reference to the KZG commitment.
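A sketch of such an update, with a hypothetical contract interface:

```python
# Post-Danksharding optimistic update: the batch rides in a blob; the contract
# stores only the claimed state root plus a reference to that blob's
# commitment. Nothing is inspected here - the challenge window does that work.
from dataclasses import dataclass

@dataclass
class StateUpdate:
    new_state_root: str
    blob_commitment: bytes  # reference to the batch data, not the data itself

class OptimisticRollupContract:
    def __init__(self):
        self.updates = []

    def submit_update(self, new_state_root, blob_commitment):
        self.updates.append(StateUpdate(new_state_root, blob_commitment))
```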
(19/25) During a fraud proof challenge period, the rollup's smart contract would need access to all the previous txns (stored in blobs).
The fraud proof will include KZG proofs, which will be verified against the stored reference; then verification can continue as it does today.
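A sketch of that challenge flow; the verifier callbacks are placeholders for real KZG verification and re-execution logic:

```python
# Step 1 establishes that the disputed transactions really were in the
# (possibly expired) blob, by checking KZG opening proofs against the
# commitment stored at submission time. Step 2 is the ordinary fraud proof.
def verify_fraud_proof(stored_commitment, kzg_openings, disputed_txns,
                       kzg_verify, reexecute):
    for opening in kzg_openings:
        if not kzg_verify(stored_commitment, opening):
            return False
    return reexecute(disputed_txns)
```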
(20/25) On the other hand, the smart contracts for zk-rollups never need access to the transaction data - however, they do need to prove that they posted the transaction data to @ethereum.
Just to over-emphasize, settlement requires transaction data to be made available.
(21/25) Post-Danksharding, zk-rollups will post 2 commitments:
- whatever zero-knowledge proof it uses internally
- the KZG commitment for the data in the blob with the transaction data
Then the smart contract will verify that both of these commitments refer to the same data.
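A sketch of that check, with hypothetical names; the equivalence proof between the rollup's own commitment and the blob's KZG commitment is left abstract:

```python
# Settlement succeeds only if (1) the validity proof shows the state
# transition is correct, and (2) the data it was computed over is the same
# data committed to by the blob - i.e. it was actually made available.
def settle_zk_batch(new_state_root, validity_proof,
                    rollup_data_commitment, blob_kzg_commitment,
                    equivalence_proof, verify_validity, verify_equivalence):
    if not verify_validity(new_state_root, rollup_data_commitment, validity_proof):
        return False
    return verify_equivalence(rollup_data_commitment, blob_kzg_commitment,
                              equivalence_proof)
```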
(23/25) And that is how rollups are going to work in a post-Danksharding world.
The big takeaways:
- Danksharding will create blobs, data external to but provable by the EVM
- Rollups will use the same paradigms we have today, but will rely on KZG commitments