Craft and publish engaging content in an app built for creators.
Publish anywhere
Post on LinkedIn, Threads, & Mastodon at the same time, in one click.
AI ideas and rewrites
Get suggestions, tweet ideas, and rewrites powered by AI.
Turn your tweets & threads into a social blog
Give your content new life with our beautiful, sharable pages. Make it go viral on other platforms too.
Powerful analytics to grow faster
Easily track your engagement analytics to improve your content and grow faster.
Build in public
Share a recent learning with your followers.
Create engagement
Pose a thought-provoking question.
Never run out of ideas
Get prompts and ideas whenever you write - with examples of popular tweets.
Share drafts & leave comments
Write with your teammates and get feedback with comments.
Create giveaways with Auto-DMs
Send DMs automatically based on engagement with your tweets.
And much more:
Auto-Split Text in Posts
Thread Finisher
Tweet Numbering
Pin Drafts
Connect Multiple Accounts
Automatic Backups
Dark Mode
Keyboard Shortcuts
Creators love Typefully
180,000+ creators and teams chose Typefully to curate their Twitter presence.
Marc Köhlbrugge @marckohlbrugge
Tweeting more with @typefully these days.
🙈 Distraction-free
✍️ Write-only Twitter
🧵 Effortless threads
📈 Actionable metrics
I recommend giving it a shot.
Jurre Houtkamp @jurrehoutkamp
Typefully is fantastic and way too cheap for what you get.
We’ve tried many alternatives at @framer but nothing beats it. If you’re still tweeting from Twitter you’re wasting time.
DHH @dhh
This is my new go-to writing environment for Twitter threads.
They've built something wonderfully simple and distraction free with Typefully 😍
Santiago @svpino
For 24 months, I tried almost a dozen Twitter scheduling tools.
Then I found @typefully, and I've been using it for seven months straight.
When it comes down to the experience of scheduling and long-form content writing, Typefully is in a league of its own.
Luca Rossi ꩜ @lucaronin
After trying literally all the major Twitter scheduling tools, I settled with @typefully.
Killer feature to me is the native image editor — unique and super useful 🙏
Visual Theory @visualtheory_
Really impressed by the way @typefully has simplified my Twitter writing + scheduling/publishing experience.
Beautiful user experience.
0 friction.
Simplicity is the ultimate sophistication.
Queue your content in seconds
Write, schedule and boost your tweets - with no need for extra apps.
Schedule with one click
Queue your post with a single click - or pick a time manually.
Pick the perfect time
Time each post to perfection with Typefully's performance analytics.
Boost your content
Retweet and plug your posts for automated engagement.
Start creating a content queue.
Write once, publish everywhere
We natively support multiple platforms, so that you can expand your reach easily.
Check the analytics that matter
Build your audience with insights that make sense.
Writing prompts & personalized post ideas
Break through writer's block with great ideas and suggestions.
Never run out of ideas
Enjoy daily prompts and ideas to inspire your writing.
Use AI for personalized suggestions
Get inspiration from ideas based on your own past tweets.
Flick through topics
Or skim through curated collections of trending tweets for each topic.
Write, edit, and track tweets together
Write and publish with your teammates and friends.
Share your drafts
Brainstorm and bounce ideas with your teammates.
Add comments
Get feedback from coworkers before you hit publish.
Control user access
Decide who can view, edit, or publish your drafts.
Okay, time to live tweet my thoughts on @stanfordnlp @StanfordAILab's "Workshop on Foundation Models." A long thread.
First and foremost: please never use the phrase "foundational models" ever again. It's a garbage name that people like @mmitchell_ai @emilymbender @mer__edith have criticized at length. I'll go find some of their comments and link to them later, but the short version is:
@mmitchell_ai @emilymbender @mer__edith 1. There is very little intellectually "foundational" about these models
2. It's not at all clear that GPT-3 and CLIP-DALL-E are the same kind of thing
3. The motivation for this relabeling appears to be entirely about political control over language
I missed @percyliang's intro talk, so I'll start with @jackclarkSF's
1. Jack says that the only groups training or trying to train 100B+ language models are companies. This omits #EleutherAI and @BigscienceW from the narrative (note both groups were also excluded from the workshop).
2. Jack says that these models will obviously do good, but his examples are highly suspect. He specifically raises examples like AI therapy chatbots as positives, even though mental health experts are virtually universally against them @MentalHealthAm
@MentalHealthAm 3. Jack is dead right about recommendation algorithms and their insidious impact on our behaviors.
4. I'm surprised by how critical he is willing to be about capital. Distinguishing between corporate and capital interests is an important and meaningful thing to do.
@MentalHealthAm We cannot forget that there are many people who are personally wealthy enough to fund the training of GPT-3. The CEO of any Fortune 500 company can do it without any meaningful impact on their lives.
If anyone reading this has $5M to spare, my DMs are open! I can train and release a completely open and free 200B language model with $5M in funding.
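For anyone wondering where a figure like $5M could come from, here is a rough back-of-envelope sketch. Every number in it (token count, GPU throughput, utilization, price per GPU-hour) is an illustrative assumption of mine, not a quote from any provider or the actual budget being proposed:

```python
# Back-of-envelope cost sketch for training a 200B-parameter model.
# All numbers are illustrative assumptions, not real quotes or an actual budget.

params = 200e9                     # 200B parameters
tokens = 300e9                     # assumed training tokens (GPT-3 used ~300B)
train_flops = 6 * params * tokens  # common ~6 * N * D estimate of training FLOPs

peak_flops = 312e12                # A100 peak BF16 throughput, FLOP/s (spec sheet)
utilization = 0.3                  # assumed fraction of peak actually achieved
usd_per_gpu_hour = 2.00            # assumed cloud price per A100-hour

gpu_hours = train_flops / (peak_flops * utilization) / 3600
cost_usd = gpu_hours * usd_per_gpu_hour
print(f"~{gpu_hours/1e6:.1f}M GPU-hours, ~${cost_usd/1e6:.1f}M")
# -> roughly 1.1M GPU-hours and ~$2M under these assumptions, so a few $M
#    including overhead, failed runs, and storage is in the right ballpark.
```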
It's really funny to hear people talk about how GPT-3 is absurdly expensive. @mmitchell_ai compared it to @CERN a couple days ago.
@mmitchell_ai @CERN I fear that people without experience with large-scale science are missing important context about scale. I am employed by a US Government contracting firm. We have a saying: a billion here, a billion there, and pretty soon you're talking about real money.
@mmitchell_ai @CERN For AI research, my company has received over 100x the amount of money it would cost to train GPT-3. The USG doesn't want a freely available GPT-3. If they did, it would exist.
Reminder that the US military spends more money on AI research than the entire private sector combined.
@mmitchell_ai @CERN @huggingface When I talk about making a 100B model or a 200B model "freely available", what I really mean is that the weights are published online. For you to actually use it, you'd need to take it to a cloud provider and pay them to use it. But at least @Microsoft wouldn't have a monopoly.
Jumping ahead to the current panel because I have strong emotions about this:
Making these models publicly available is a prerequisite for auditing them. The current paradigm of private models is fundamentally at odds with auditing. And this is deliberate!
@mmitchell_ai @timnitGebru Carlini's paper was about measuring memorization of training data. Forget about ethics and policy work for a second: if this kind of foundational research on how models work is censored, we have no hope of building a real understanding of how these models work, let alone meaningfully evaluating whether their use is a good idea. You can't talk about the ethics of technology if you don't know how the technology functions. And @GoogleAI doesn't want you to know how it functions.
I missed the name of the woman who is currently speaking, but she is 100% spot on about these models already being deployed in ways that harm people and that people don't want.
These models are already being used to monitor political dissidents.
These models are already being used to spy on people.
These models are already being used to send people to jail.
If your understanding of the harms that these models do doesn't start with "these models are currently being used to violate human rights" and go downhill from there, you're quite poorly calibrated.
Police use of ShotSpotter AIs puts people in jail. But these algorithms don't work, or worse, are openly fraudulent. In one example, an employee manually reclassified a sound as a "gunshot" and then changed its recorded location when contacted by the police and asked to look for it.
Someone just said "who would have thought about these things 12 years ago."
I hate to break it to you, but computer security, social impacts of technology, and ethics were not invented in 2010. LOTS of people were thinking about this 12 years ago, and even many years prior.
I really need to write a blog post soon about my Rule 1 of technology: if it's the primary activity of the villain in a sci-fi book or movie, don't do it.
twitter.com/acidflask/status/1429883659058913288
@mtlaiethics @percyliang @defcon @aivillage_dc @Twitter @ErickGalinkin @ruchowdh The panelists are correct that evaluation suites are insufficient to understand real-world deployment contexts. In other fields we do studies of deployed systems and publish them. Not so in ML, because companies won't let you. I've tried at my company, and I know people who have tried elsewhere.
My employer doesn't endorse anything I say here and would probably specifically disown most of it. Booz does a lot of classified work, and nothing I say should be considered a comment on any non-public programs at Booz or the US government, classified or otherwise.
Basically everyone who works in cutting-edge AI research is muzzled in one way or another, and I am no different. This is one of the major reasons to democratize public discussion IMO... I can say things about Stanford and Google that employees there can't, and other people can say things about my employer that I can't. We must wear these muzzles to get access to technology and ideas that enable us to function as researchers. That's how the ML world works right now.
Anyways, back to the panel:
You cannot rely on the US government to regulate AI technology. For over a decade the reality of the world has been that people who are rich are above the law. You set aside funds to pay token judgements, and then you go profit off violating US law.
If the US government is going to regulate AI technology in more than name, the very first thing that has to happen is that the *minimum* judgement against a company for violating US regulatory law is the total profits accrued by that company. And then you need to pile punitive damages on top of that.
@wjscheirer It's interesting to hear alignment finally come out. I was wondering about this. I'm actually currently writing a position paper on behalf of #EleutherAI about our attitudes toward the release of technology.
I am hoping to have that out by the end of the month, though my migraines have already caused significant delays.
Anyways, I really want to see more communication and collaboration between what I'll call the "AI ethics" and "AI alignment" communities. Neither community is that positively disposed towards the other in my experience, and I think that's a major shame. This is something I am trying to work on within #EleutherAI, which is broadly speaking aligned with the alignment community rather than the ethics one.
It's cute to listen to people talk about "countries that don't have the same attitudes towards human rights" given the widespread abuses of human rights by the US government.
Going back to @shotspotter, no independent audit or study has ever supported its claims. A MacArthur Justice Center study found 86% of gunshot reports resulted in no evidence of a crime: endpolicesurveillance.com/
@shotspotter And yet it's widely deployed in the US. My mom is a public defender in DC, and she says the majority of cases she sees have "evidence" provided by @shotspotter.
This is Merc. He’s really cute, but he doesn’t seem to want to let me tweet.
I'm getting a lot of messages and replies. I'm going to try to get to everyone, but it will take a while. I'm not ignoring you, and don't hesitate to nudge me again.
@shotspotter I like the question about how Yejin Choi got YouTube data in violation of YouTube's ToS. I'm really impressed that she was willing to straight up just answer that they violated Google's ToS. It's a lot more common to try to hide this fact behind weasel words.
@shotspotter In case anyone's curious, she's spot on about it being legal. Something widely ignored in AI research (including in the AI ethics literature) is that ToS are basically pieces of paper. They are not legally binding; researchers *probably* have a pass to ignore them in the US due to fair use, even when they do express actual legal rights; and website owners typically fundamentally don't care.
Researchers are also very quick to ignore inconvenient truths about their data. People knew for years that ImageNet contained child pornography. I knew this in 2016 and I wasn't even an AI researcher in 2016. But until it was a liability to the authors, it was fine.
A current example is the "OpenSubtitles" dataset, which claims to be a public dataset of movie scripts. Despite what the paper and the original website say, it is in no way in compliance with US copyright law.
As far as I know, the Pile is the only paper that uses the dataset and admits that the website is a massive copyright violation. This dataset was actually the motivation for section 6.5 in the Pile paper (arxiv.org/abs/2101.00027), where we distinguish three senses in which a dataset is (ethically) available for public use. As I said, it's probably fair use to use the data, so legally it's not at issue (in the US at least).
I would love to learn if I'm wrong about this. @mmitchell_ai @haydenfield @mer__edith @emilymbender do any of y'all know a paper or blog post that openly discusses the fact that some dataset authors lie about copyright compliance? With OpenSubtitles or anything else as examples?