Get fresh prompts and ideas whenever you write - with examples of popular tweets.
The best writing experience, with powerful scheduling
Create your content without distractions - where it's easier to write and organize threads.
Cross-post to LinkedIn (NEW)
Automatically add LinkedIn versions to your posts.
Discover what works with powerful analytics
Easily track your engagement analytics to improve your content and grow faster.
Collaborate on drafts and leave comments
Write with your teammates and get feedback with comments.
🧵 Rewrite as thread start · 🔥 Make it Punchier · ✅ Fix Grammar
Improve your content with AI suggestions and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
And much more:
Auto-Split Text in Tweets
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Top tweeters love Typefully
100,000+ creators and teams chose Typefully to curate their Twitter presence. Join them.
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
I forgot about Twitter for 10 years. Now I'm remembering why I liked it in the first place.
Huge part of my new love for it: @typefully. It makes writing threads easy and oddly fun.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully.
ian hollander @ianhollander
Such a huge fan of what @typefully has brought to the writing + publishing experience for Twitter.
Easy, elegant and almost effortless threads - and a beautiful UX that feels intuitive for the task - and inviting to use.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Kudos to @frankdilo and @linuz90 for building such a delightful experience.
Killer feature to me is the native image editor — unique and super useful 🙏
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your tweet with a single click - or pick a time manually.
Pick the perfect time
Time each tweet to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your tweets for automated engagement.
Start creating a content queue.
Tweet with daily inspiration
Break through writer's block with great ideas and suggestions.
Start with a fresh idea
Enjoy daily prompts and ideas to inspire your writing.
Check out examples
Get inspiration from tweets that used these prompts and ideas.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Check the analytics that matter
Build your audience with insights that make sense.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments (NEW)
Get feedback from coworkers before you hit publish.
Read, Write, Publish
Control user access
Decide who can view, edit, or publish your drafts.
The batch size is one of the hyperparameters with the most influence on the outcome of training a neural network.
Here is everything you need to know about the batch size when training a neural network:
Gradient Descent is an optimization algorithm used to train neural networks.
On every iteration, the algorithm computes how much we need to adjust the model to get closer to the desired results.
Here is how it works in a couple of sentences:
We take samples from the training dataset, run them through the model, and determine how far away our results are from the ones we expect.
We use this "error" to compute how much we need to update the model weights to improve the results.
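To make this concrete, here is a minimal sketch of one such iteration, assuming a toy linear model and squared error. The data, weights, and learning rate below are illustrative, not from this thread:

```python
import numpy as np

# Toy setup (illustrative): a linear model y = X @ w with squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 training samples, 3 features
y = X @ np.array([2.0, -1.0, 0.5])   # targets from a known weight vector
w = np.zeros(3)                      # model weights we want to learn
lr = 0.1                             # learning rate

# One iteration of Gradient Descent:
predictions = X @ w                  # run the samples through the model
error = predictions - y              # how far we are from the targets
gradient = X.T @ error / len(X)      # how much to adjust each weight
w -= lr * gradient                   # update the model
```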
A critical decision we need to make is how many samples we use on every iteration.
We have three choices:
• Use a single sample of data
• Use all of the data at once
• Use some of the data
Using a single sample of data on every iteration is called "Stochastic Gradient Descent" (SGD).
The algorithm uses one sample at a time to compute the updates. (A short sketch follows the pros and cons below.)
Advantages of Stochastic Gradient Descent:
• Simple to understand
• Avoids getting stuck in local minima
• Provides immediate feedback
Disadvantages:
• Computationally intensive
• May not settle in the global minimum
• The performance will be noisy
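Here is a minimal sketch of SGD, reusing the same illustrative toy linear model:

```python
import numpy as np

# Same illustrative toy linear model as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5])
w, lr = np.zeros(3), 0.1

# Stochastic Gradient Descent: one update per individual sample.
for epoch in range(10):
    for i in rng.permutation(len(X)):   # visit samples in random order
        error = X[i] @ w - y[i]         # error for this single sample
        w -= lr * X[i] * error          # immediate (and noisy) update
```

Each update sees only one sample, which is why the feedback is immediate but the performance is noisy.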
Using all the data at once is called "Batch Gradient Descent."
The algorithm runs the entire dataset through the model and computes a single update after processing every sample. (A short sketch follows the pros and cons below.)
Advantages of Batch Gradient Descent:
• Computationally efficient
• Stable performance (less noise)
Disadvantages:
• Requires a lot of memory
• May get stuck in local minima
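A minimal sketch of Batch Gradient Descent on the same illustrative toy model:

```python
import numpy as np

# Same illustrative toy linear model as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5])
w, lr = np.zeros(3), 0.1

# Batch Gradient Descent: one update per pass over the entire dataset.
for epoch in range(100):
    error = X @ w - y                  # errors for all samples at once
    gradient = X.T @ error / len(X)    # average gradient over the dataset
    w -= lr * gradient                 # one stable, low-noise update
```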
Using some data (more than one sample but fewer than the entire dataset) is called "Mini-Batch Gradient Descent."
The algorithm works like Batch Gradient Descent, the only difference being that we use fewer samples for each update.
Advantages of Mini-Batch Gradient Descent:
• Avoids getting stuck in local minima
• More computationally efficient than SGD
• Doesn't need as much memory as BGD
Disadvantages:
• New hyperparameter to worry about
We usually call this hyperparameter "batch_size."
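A minimal sketch of Mini-Batch Gradient Descent on the same illustrative toy model (the batch_size value below is just an example):

```python
import numpy as np

# Same illustrative toy linear model as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5])
w, lr = np.zeros(3), 0.1
batch_size = 32                        # the new hyperparameter

# Mini-Batch Gradient Descent: one update per batch of samples.
for epoch in range(50):
    order = rng.permutation(len(X))    # shuffle the samples every epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        error = X[idx] @ w - y[idx]              # errors for this batch
        gradient = X[idx].T @ error / len(idx)   # average gradient over the batch
        w -= lr * gradient
```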
We rarely use Batch Gradient Descent in practice, especially with large datasets.
Stochastic Gradient Descent (using a single sample at a time) is not that popular either.
Mini-Batch Gradient Descent is the most widely used.
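In practice, deep learning frameworks expose this choice as a single parameter. A sketch with Keras (the model and data below are placeholders, not from this thread):

```python
import numpy as np
from tensorflow import keras

# Placeholder data: 1,000 samples with 20 features, binary labels.
X = np.random.normal(size=(1000, 20)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# batch_size=1 behaves like SGD, batch_size=len(X) like Batch Gradient
# Descent, and anything in between is Mini-Batch.
model.fit(X, y, batch_size=32, epochs=5)
```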
There's a lot of research around the optimal batch size.
Every problem is different, but empirical evidence suggests that smaller batches perform better.
(Small as in less than a hundred or so.)
Here is a good paper with recommendations for the batch size:
https://arxiv.org/abs/1206.5533
They recommend using 32 as a good default value.
A few notes:
• Training time: 263s, 21s, and 8s, respectively.
• Testing accuracy: 0.62, 0.77, 0.78, respectively.
• Attached images show the noise in the testing loss.
Noticeable differences!
Every week, I break down machine learning concepts to give you ideas on applying them in real-life situations.
Follow me @svpino to ensure you don't miss what's coming next.