Craft and publish engaging content in an app built for creators.
Publish anywhere
Post on LinkedIn, Threads, & Mastodon at the same time, in one click.
Make it punchier 👊
Typefully
@typefully
We're launching a Command Bar today with great commands and features.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, shareable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
@aaditsh
I think this thread hook could be improved.
@frankdilo
On it 🔥
Share drafts & leave comments
Write with your teammates and get feedback with comments.
Easlo
@heyeaslo
Reply with "Notion" to get early access to my new template.
Jaga
@kandros5591
Notion 🙏
DM Sent
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
180,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge @marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp @jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer, but nothing beats it. If you’re still tweeting from Twitter, you’re wasting time.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory @visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Control user access
Decide who can view, edit, or publish your drafts.
gradient descent is a popular optimization algorithm used in machine learning and deep learning, and it's surprisingly easy to understand. it's important to get a good grasp on the concepts driving modern tech so you don't get left in the dust! free knowledge:
FIRST you need to know what the loss function is. a loss function is a way of measuring how well your model predicts the expected outcome. if the model's predictions deviate too much from the actual results, the loss function produces a very large number.
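to make that concrete, here's a tiny python sketch of one common loss function, mean squared error (the numbers are made up just for illustration):

```python
# mean squared error: the average squared difference between
# predictions and actual values
def mse(predictions, actuals):
    n = len(predictions)
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / n

# toy numbers: close predictions -> tiny loss, wild ones -> huge loss
print(mse([1.1, 1.9, 3.2], [1, 2, 3]))    # 0.02  (good model)
print(mse([5.0, -3.0, 10.0], [1, 2, 3]))  # 30.0  (bad model)
```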
so basically, we want the loss function to produce the smallest number possible. gradient descent is the algorithm used to minimize the loss function by finding the optimal values of its parameters within a large parameter space. bear with me.
example: if we have the loss function f(x, y) = x² + 2y², then the optimal parameter values are x=0 and y=0, because that is where the function is at a minimum. this function is simple, but in higher dimensions it may be incredibly difficult to solve the equations for zero.
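for a function this simple you really can just set the partial derivatives to zero and solve. a quick sympy sketch of that "solve for zero" approach (the one gradient descent replaces in higher dimensions):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + 2 * y**2

# set both partial derivatives to zero and solve the system
print(sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y]))  # {x: 0, y: 0}
```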
approximating with gradient descent may be much faster than trying to solve the problem exactly, by hand or with a computer. okay, so what's a gradient? the gradient of a function at any point is the direction of steepest increase of the function at that point.
once the gradient of the function at a point is calculated, the direction of steepest DESCENT at that point can be found by multiplying the gradient by -1. eezypeezy.
-1 * gradient = direction of steepest descent
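in code, for our f(x, y) = x² + 2y² the gradient is (2x, 4y), so steepest descent is just that with the signs flipped:

```python
def grad(x, y):
    # partial derivatives of f(x, y) = x**2 + 2*y**2
    return (2 * x, 4 * y)

gx, gy = grad(1.0, 1.0)
print((gx, gy))    # (2.0, 4.0): steepest increase
print((-gx, -gy))  # (-2.0, -4.0): steepest descent, pointing toward (0, 0)
```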
now imagine you're standing at the top of a mountain and you want to descend. also assume it's EXTREMELY foggy so you can't really see where to go. you only have a tool that tells you the direction of steepest descent (how convenient?!). what do you do?
you'll probably take a step in the direction of steepest descent, use the tool and find the new direction of steepest descent from your new location, take another step in that direction, and keep doing this until you reach the bottom.
in this analogy, you are the algorithm, and the path taken down the mountain represents the sequence of parameter settings the algorithm will explore. the tool used to measure steepness is differentiation. the direction you choose to travel in aligns with the negative of the gradient.
the update rule moves parameter θ in the direction opposite the gradient while taking small steps: θ_new = θ_old − α · ∇J(θ_old), where α is the learning rate. this formula tells us our next position. to estimate the optimal parameter values, this process is repeated until convergence.
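here's that update rule as a bare-bones gradient descent loop on our toy f (the learning rate and stopping threshold are just values i picked):

```python
def gradient_descent(x, y, lr=0.1, tol=1e-8, max_steps=10_000):
    for step in range(max_steps):
        gx, gy = 2 * x, 4 * y            # gradient of f(x, y) = x**2 + 2*y**2
        if gx * gx + gy * gy < tol:      # "convergence": gradient is ~zero
            break
        x, y = x - lr * gx, y - lr * gy  # theta_new = theta_old - lr * gradient
    return x, y, step

print(gradient_descent(3.0, 2.0))  # ends up very close to (0, 0)
```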
repeating until convergence just means keep going until you reach a minimum, and that's where the algorithm stops. this could be a local or a global minimum, which can lowkey cause problems, but for simplicity i won't get into that right now. (look it up though frfr)
now imagine you have a machine learning problem and want to train your algorithm with gradient descent to minimize your loss function J(w, b) and reach its local minimum by tweaking its parameters, w and b. we want to find the values of w and b where J(w, b) is at its minimum.
to start finding the right values, we initialize w and b with some random numbers. gradient descent then starts at that point (usually somewhere high up on the loss surface) and takes one step after another in the steepest descent direction until it reaches the point where the loss function is minimized.
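in code that looks something like this: a toy linear regression where J(w, b) is mean squared error (the data and hyperparameters are made up for illustration):

```python
import random

# toy data generated from y = 3x + 1, so we know the "right" answer
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3 * x + 1 for x in xs]

w, b = random.random(), random.random()  # random initialization
lr = 0.05
n = len(xs)

for _ in range(5_000):
    # gradients of J(w, b) = (1/n) * sum((w*x + b - y)**2)
    errors = [w * x + b - y for x, y in zip(xs, ys)]
    dw = (2 / n) * sum(e * x for e, x in zip(errors, xs))
    db = (2 / n) * sum(errors)
    w, b = w - lr * dw, b - lr * db  # step downhill

print(round(w, 3), round(b, 3))  # ~3.0 and ~1.0
```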
how big the steps are is determined by the learning rate, which decides how slow or fast we will move toward the optimal weights. for gradient descent to reach the local minimum we must set the learning rate to an appropriate value, neither too low nor too high.
if the steps it takes are too big, it may never reach the local minimum, because it overshoots and bounces back and forth across the valley of the convex loss function. if we set the learning rate to a very small value, the optimization will be super slow.
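you can watch both failure modes (and a happy medium) on the toy f from earlier; these learning rates are just picked to make the point:

```python
def run(lr, steps=50):
    x, y = 3.0, 2.0
    for _ in range(steps):
        x, y = x - lr * 2 * x, y - lr * 4 * y  # gd step on f = x**2 + 2*y**2
    return x, y

print(run(0.6))    # too big: y overshoots and blows up
print(run(0.001))  # too small: barely moved from (3, 2) after 50 steps
print(run(0.1))    # reasonable: essentially at (0, 0)
```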
alright, i'm tired, but those are some basics. i hope this helps people. it's a pretty cool algorithm and it really shouldn't intimidate you at all.