Typefully

Write better tweets.
Grow your audience.

Now with AI and LinkedIn cross-posting.

Join 100,000+ creators

@david_perell
@ev
@dhh
@marckohlbrugge
@maccaw
@warikoo
@adamwathan
Build in public

Share a recent learning with your followers.

Create engagement

Pose a thought-provoking question.

Curated writing prompts to never run out of ideas

Get fresh prompts and ideas whenever you write - with examples of popular tweets.

The best writing experience, with powerful scheduling

Create your content without distractions, in an editor where it's easier to write and organize threads.

Cross-post to LinkedIn (NEW)

Automatically add LinkedIn versions to your posts.

Discover what works with powerful analytics

Easily track your engagement analytics to improve your content and grow faster.

Collaborate on drafts and leave comments

Write with your teammates and get feedback with comments.

🧵 Rewrite as thread start · 🔥 Make it Punchier · ✅ Fix Grammar

Improve your content with AI suggestions and rewrites

Get suggestions, tweet ideas, and rewrites powered by AI.

And much more:

Auto-Split Text in Tweets

Thread Finisher

Tweet Numbering

Pin Drafts

Connect Multiple Accounts

Automatic Backups

Dark Mode

Keyboard Shortcuts

Top tweeters love Typefully

100,000+ creators and teams chose Typefully to curate their Twitter presence. Join them.

Santiago (@svpino): "For 24 months, I tried almost a dozen Twitter scheduling tools. Then I found @typefully, and I've been using it for seven months straight. When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own."

Luke Beard 🇺🇦 (@LukesBeard): "Good lord, @typefully is very good."

Anna David (@annabdavid): "I forgot about Twitter for 10 years. Now I'm remembering why I liked it in the first place. Huge part of my new love for it: @typefully. It makes writing threads easy and oddly fun."

DHH (@dhh): "This is my new go-to writing environment for Twitter threads. They've built something wonderfully simple and distraction free with Typefully."

ian hollander (@ianhollander): "Such a huge fan of what @typefully has brought to the writing + publishing experience for Twitter. Easy, elegant and almost effortless threads - and a beautiful UX that feels intuitive for the task - and inviting to use."

Luca Rossi ꩜ (@lucaronin): "After trying literally all the major Twitter scheduling tools, I settled with @typefully. Kudos to @frankdilo and @linuz90 for building such a delightful experience. Killer feature to me is the native image editor — unique and super useful 🙏"

Queue your content in seconds

Write, schedule and boost your tweets - with no need for extra apps.

Schedule with one click

Queue your tweet with a single click - or pick a time manually.

Pick the perfect time

Time each tweet to perfection with Typefully's performance analytics.

Boost your content

Retweet and plug your tweets for automated engagement.

Tweet with daily inspiration

Break through writer's block with great ideas and suggestions.

Start with a fresh idea

Enjoy daily prompts and ideas to inspire your writing.

Check examples out

Get inspiration from tweets that used these prompts and ideas.

Flick through topics

Or skim through curated collections of trending tweets for each topic.

Check the analytics that matter

Build your audience with insights that make sense.

Write, edit, and track tweets together

Write and publish with your teammates and friends.

Share your drafts

Brainstorm and bounce ideas with your teammates.

Add comments (NEW)

Get feedback from coworkers before you hit publish.

Control user access

Decide who can view, edit, or publish your drafts.

Build an automated tweet machine

Our Zapier integration enables countless no-code workflows.

Typefully → Slack: Share new drafts in a Slack channel

RSS → Typefully: New draft from RSS feed item content

Docs → Typefully: New scheduled draft from a Google Doc

Typefully → Sheets: New spreadsheet row from a published tweet

Schedule → Typefully: Create a new template draft every Monday

Typefully → Gmail: Send an email for every published thread

Feedly → Typefully: Create a draft for new items in a feeds folder

Twitter → Typefully: Thank new followers with a tweet

Ready to write better tweets and grow your audience?

Get started with our generous free plan.

Everything you need to know about the batch size

By Santiago (@svpino) · 8 months ago

The batch size is one of the most influential hyperparameters in the outcome of a neural network. Here is everything you need to know about the batch size when training a neural network:
I ran a few experiments, and you can run them too. I plotted everything using @Cometml. To run the notebook published at the end of this thread, you can create a free account here: https://comet.com/signup?utm_source=svpino&utm_medium=referral&utm_campaign=online_partner_svpino_2022 It takes about 10 seconds. Let's start:
Gradient Descent is an optimization algorithm used to train neural networks. On every iteration, the algorithm computes how much we need to adjust the model to get closer to the desired results. Here is how it works in a couple of sentences:
We take samples from the training dataset, run them through the model, and determine how far away our results are from the ones we expect. We use this "error" to compute how much we need to update the model weights to improve the results.
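
To make that update step concrete, here is a minimal NumPy sketch of one gradient descent iteration on a linear model with mean squared error. This is illustrative only, not the thread's notebook, and all names are made up for the example:

```python
# A minimal sketch of one gradient descent update on a linear model with
# mean squared error. Illustrative only; not the notebook from this thread.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))     # a batch of 8 samples with 3 features each
y = rng.normal(size=8)          # the results we expect
w = np.zeros(3)                 # the model's weights
lr = 0.1                        # learning rate

pred = X @ w                    # run the samples through the model
error = pred - y                # how far our results are from the expected ones
grad = X.T @ error / len(y)     # gradient of the MSE with respect to the weights
w -= lr * grad                  # update the weights to improve the results
```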
A critical decision we need to make is how many samples we use on every iteration. We have three choices:
• Use a single sample of data
• Use all of the data at once
• Use some of the data
Using a single sample of data on every iteration is called "Stochastic Gradient Descent" (SGD). The algorithm uses one sample at a time to compute the updates.
Advantages of Stochastic Gradient Descent:
• Simple to understand
• Avoids getting stuck in local minima
• Provides immediate feedback

Disadvantages:
• Computationally intensive
• May not settle in the global minimum
• The performance will be noisy
Using all the data at once is called "Batch Gradient Descent." The algorithm processes every sample in the dataset, and only then computes the update.
Advantages of Batch Gradient Descent:
• Computationally efficient
• Stable performance (less noise)

Disadvantages:
• Requires a lot of memory
• May get stuck in local minima
Using some data (more than one sample but fewer than the entire dataset) is called "Mini-Batch Gradient Descent." The algorithm works like Batch Gradient Descent, the only difference being that we use fewer samples.
Advantages of Mini-Batch Gradient Descent:
• Avoids getting stuck in local minima
• More computationally efficient than SGD
• Doesn't need as much memory as BGD

Disadvantages:
• New hyperparameter to worry about

We usually call this hyperparameter "batch_size."
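
The three variants differ only in how many samples feed each update. Here is a minimal NumPy sketch of mini-batch gradient descent where batch_size = 1 recovers SGD and batch_size = len(X) recovers Batch Gradient Descent (again, an illustrative sketch with made-up names, not the thread's notebook):

```python
# Mini-batch gradient descent on a linear model. batch_size=1 gives SGD,
# batch_size=len(X) gives Batch Gradient Descent, and anything in between
# is Mini-Batch Gradient Descent.
import numpy as np

def minibatch_gd(X, y, batch_size, lr=0.1, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))             # shuffle every epoch
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            error = X[batch] @ w - y[batch]         # error on this batch only
            grad = X[batch].T @ error / len(batch)  # gradient for the batch
            w -= lr * grad                          # one update per batch
    return w
```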
We rarely use Batch Gradient Descent in practice, especially with large datasets. Stochastic Gradient Descent (using a single sample at a time) is not that popular either. Mini-Batch is the most used.
There's a lot of research around the optimal batch size. Every problem is different, but empirical evidence suggests that smaller batches perform better. (Small as in less than a hundred or so.)
Here is a good paper with recommendations for the batch size: https://arxiv.org/abs/1206.5533 They recommend using 32 as a good default value.
I created a notebook to train a neural network in three different ways:
1. Using batch size = 1
2. Using batch size = 32
3. Using batch size = n
https://colab.research.google.com/drive/1C_y3Ed43Zgt-xC8GhSnaK133s-2kMDsa?usp=sharing
Attached you can see how the accuracy moved in each of these experiments:
A few notes:
• Training time: 263s, 21s, and 8s, respectively.
• Testing accuracy: 0.62, 0.77, and 0.78, respectively.
• The attached images show the noise in the testing loss.
Noticeable differences!
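
For a sense of how such a three-way comparison is wired up, here is a minimal Keras-style sketch on synthetic data. It is a hypothetical stand-in, not the linked notebook (which trains a real model and logs plots to Comet), so don't expect its numbers to match the ones above:

```python
# A toy three-way comparison of batch sizes with Keras on synthetic data.
# Hypothetical stand-in for the linked Colab notebook.
import numpy as np
from tensorflow import keras

n = 1024                                       # dataset size (made up)
x = np.random.rand(n, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")     # toy binary target

def train(batch_size):
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="sgd", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # batch_size picks the flavor: 1 = SGD, 32 = mini-batch, n = batch GD
    model.fit(x, y, epochs=5, batch_size=batch_size, verbose=0)
    return model.evaluate(x, y, verbose=0)

for bs in (1, 32, n):
    loss, acc = train(bs)
    print(f"batch_size={bs}: loss={loss:.3f}, accuracy={acc:.3f}")
```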
To run this notebook and get the plots, create a @Cometml account. It's free: https://comet.com/signup?utm_source=svpino&utm_medium=referral&utm_campaign=online_partner_svpino_2022 Then copy your API key and use it when running the notebook.
Every week, I break down machine learning concepts to give you ideas on applying them in real-life situations. Follow me @svpino to ensure you don't miss what's coming next.
Santiago (@svpino)

I write about Artificial Intelligence and sometimes it's good.