Craft and publish engaging content in an app built for creators.
NEW
Publish anywhere
Post on LinkedIn & Mastodon too. More platforms coming soon.
Make it punchier
Typefully
@typefully
We're launching a Command Bar today with great commands and features.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, sharable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
@aaditsh
I think this thread hook could be improved.
@frankdilo
On it
Share drafts & leave comments
Write with your teammates and get feedback with comments.
NEW
Easlo
@heyeaslo
Reply with "Notion" to get early access to my new template.
Jaga
@kandros5591
Notion
DM Sent
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
150,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge@marckohlbrugge
Tweeting more with @typefully these days.
• Distraction-free
• Write-only Twitter
• Effortless threads
• Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp@jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We've tried many alternatives at @framer but nothing beats it. If you're still tweeting from Twitter you're wasting time.
DHH@dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction-free with Typefully
Santiago@svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor - unique and super useful
Visual Theory@visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
NEW
Add comments
Get feedback from coworkers before you hit publish.
Read, Write, Publish
Control user access
Decide who can view, edit, or publish your drafts.
1/15...Data pipelines are traditionally built with an ETL design. But with the rise of the modern data stack, new data pipelines are moving towards an ELT design.
Let's understand the E, L & T steps, the 2 designs, and everything in between: 🧵
2/15... Extract:
• Fetch data from different data sources.
• It can be from APIs, sensors, events, databases, raw files, or any other source that generates data.
• Can be done in batches/streams.
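The Extract step can be sketched in a few lines of Python. The two sources below (an API response and a CSV export) and all sample data are invented for illustration; a real extractor would make an HTTP call or read from a queue or database instead:

```python
import csv
import io
import json

def extract_from_api(raw_json: str) -> list[dict]:
    # Stand-in for an HTTP call: parse an API response body into rows.
    return json.loads(raw_json)

def extract_from_csv(raw_csv: str) -> list[dict]:
    # Stand-in for reading a raw file exported by another system.
    return list(csv.DictReader(io.StringIO(raw_csv)))

# One batch from each source. Note the sources disagree: one encodes the
# city and sends age as a number, the other spells the city out and
# sends age as a string - exactly what the Transform step must fix.
api_rows = extract_from_api('[{"city": "NYC", "age": 31}]')
csv_rows = extract_from_csv("city,age\nNew York,28\n")
```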
3/15... Transform:
• Process the extracted data from different sources and make it ready for analytics.
• The goal is to clean the data of any junk/invalid values and to maintain data uniformity.
• Some common transformation models are:
4/15...a. Data Value Unification:
Transform data values from different sources to a single set of values. Example: One source can send city names as codes and the other can send them as the full name of the city.
5/15...b. Data Type and Size Unification:
Transform data types coming from different sources into one data type. Example: Age can come as an int from one source and a string from the other source.
6/15...c. Deduplication:
Remove duplicates from the data coming in from multiple sources.
d. Dropping Columns (Vertical Slicing):
Removing columns from the data that are not required.
7/15...e. Value-Based Row Filtering (Horizontal Slicing):
Based on some business rules, we can filter out rows based on some column values.
f. Correcting known errors:
Correcting known issues and inconsistencies in data.
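The transformation models above can be sketched together in one pass over the extracted rows. The city-code table, the "drop negative ages" rule, and the `debug_id` column are all invented for illustration:

```python
CITY_CODES = {"NYC": "New York"}  # a. map coded values to full city names

def transform(rows: list[dict]) -> list[dict]:
    cleaned, seen = [], set()
    for row in rows:
        city = CITY_CODES.get(row["city"], row["city"])  # a. value unification
        age = int(row["age"])                            # b. type unification
        if age < 0:                   # e./f. filter rows with known-bad values
            continue
        if (city, age) in seen:       # c. deduplication across sources
            continue
        seen.add((city, age))
        cleaned.append({"city": city, "age": age})  # d. vertical slicing:
    return cleaned                                  #    drop unneeded columns

result = transform([
    {"city": "NYC", "age": "31", "debug_id": "x1"},
    {"city": "New York", "age": 31, "debug_id": "x2"},  # dup after unification
    {"city": "Boston", "age": "-5", "debug_id": "x3"},  # invalid value
])
```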
8/15... Load:
• Loading the data into a data warehouse or a data mart where it can be used for analytics/BI/ML by the end users.
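A sketch of the Load step, with an in-memory SQLite table standing in for the warehouse/mart purely for illustration; the table and column names are made up:

```python
import sqlite3

def load(rows: list[dict], conn: sqlite3.Connection) -> None:
    # Write transformed rows into the table that end users will query.
    conn.execute("CREATE TABLE IF NOT EXISTS users (city TEXT, age INTEGER)")
    conn.executemany("INSERT INTO users (city, age) VALUES (:city, :age)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load([{"city": "New York", "age": 31}], conn)
```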
9/15...I hope you understood the 3 steps in a data pipeline. Let's look at the ETL design pattern now.
10/15...As explained in the image above:
Data is extracted and transformations are done on top of it. Finally, the transformed data is loaded into the final destination.
But a few things to note here:
11/15...1️⃣ Data is stored in a "Staging" layer before any transformations are done on it.
The staging layer can be persistent (new data is added here as and when received) or non-persistent (old data is removed and new data is added every time transformations are done).
12/15...2️⃣ Transformation and loading need to be performed in the same run, as the data is stored in a temp staging layer after extraction.
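Putting those two notes together, an ETL run might look like this sketch, where a temporary in-memory list plays the (non-persistent) staging layer and transform + load happen in the same run; the lambdas are placeholder implementations:

```python
def run_etl(extract, transform, load):
    staging = extract()                   # 1. raw data lands in staging
    warehouse = load(transform(staging))  # 2. T and L in the same run
    staging.clear()                       # non-persistent staging: wiped after
    return warehouse

warehouse = run_etl(
    extract=lambda: [{"city": "NYC", "age": "31"}],
    transform=lambda rows: [{**r, "age": int(r["age"])} for r in rows],
    load=list,  # stand-in for writing to a real warehouse
)
```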
Moving towards ELT design:
13/15...Here, we load data after extraction and perform transformations on it.
1️⃣ There is no staging layer here, as the data is directly loaded to the "user-access" layer (where end-users can access the data).
This is where data lakes can be used for loading the extracted data.
14/15...2️⃣ The loading and transformations can be done independently of each other in separate runs.
As the modern data stack matures, we are seeing a rise in specialized tools for each of the 3 steps in a pipeline.
Ex: Fivetran, Stitch for E and L; dbt for T, etc.
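That ELT split can be sketched the same way: raw rows land in the user-access layer first, and a separate, independent run builds the cleaned view later. A plain list stands in for the data lake here, and all field names are invented:

```python
lake: list[dict] = []  # stand-in for the data lake / user-access layer

def load_raw(rows: list[dict]) -> None:
    # E + L: land raw data untouched; no staging layer involved.
    lake.extend(rows)

def transform_run() -> list[dict]:
    # T: a later, independent run reads the raw data and cleans it.
    return [{"city": r["city"], "age": int(r["age"])} for r in lake]

load_raw([{"city": "NYC", "age": "31", "raw_payload": "..."}])
cleaned = transform_run()  # could also run hours later, on a schedule
```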
15/15...PS:
Staging Layer is just a name given to a temp warehouse or a data mart that stores intermediate data.
User-Access Layer is a lake/warehouse/mart from where end-users can access the data.