Ex-Google employee: “YouTube recommendations are toxic”

Where else could you go for your daily dose of how-tos, like this one about draining washing machines, or 15-minute compilations of cats vomiting? This is what YouTube was made for, and it’s beautiful.
At the same time, as YouTube has become the place for video on the web, it’s led to a raft of new problems. Content moderation is a constant struggle, and while YouTube can do better, there will likely always be some offensive videos that people can seek out. The real issue, however, is the videos we don’t seek out: YouTube’s recommendations.

Recommendations are a waste of time
You can see the recommendations in the “Up next” list on the right of the screen and they’ll also play automatically when you’ve got autoplay enabled.
These are the videos you should be wary of, according to Guillaume Chaslot. He’s the founder of AlgoTransparency, a project that demands greater transparency from online platforms, and he used to work at Google on YouTube’s recommendation algorithm. He says the motivations behind it are deeply flawed, because it isn’t really about what the viewer wants.
Credit: DisinfoLab. Chaslot speaking at the DisinfoLab Conference in Brussels.

“It isn’t inherently awful that YouTube uses AI to recommend videos for you, because if the AI is well tuned it can help you get what you want. This would be amazing,” Chaslot told TNW. “But the problem is that the AI isn’t built to help you get what you want — it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”
Chaslot explains that the metric the algorithm uses to determine a ‘successful’ recommendation is watch time. This might be great for a company trying to sell ads, but it doesn’t necessarily reflect what the user wants — and it has grave side effects.
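To make that incentive concrete, here’s a toy sketch in TypeScript. Every field, number, and weight is made up and nothing here comes from YouTube’s actual systems; it only illustrates how the same candidate videos rank differently when the objective is predicted watch time rather than predicted viewer satisfaction.

```typescript
// Toy illustration only: a hypothetical candidate video and two ways to rank it.
// The fields and numbers are invented; they just make the incentive difference concrete.
interface Candidate {
  videoId: string;
  predictedWatchMinutes: number; // how long the model thinks you'll keep watching
  predictedSatisfaction: number; // 0..1, e.g. inferred from surveys and likes
}

// Ranking tuned for the business metric Chaslot describes: expected watch time.
function rankByWatchTime(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort(
    (a, b) => b.predictedWatchMinutes - a.predictedWatchMinutes,
  );
}

// Ranking tuned for the viewer instead: expected satisfaction.
function rankBySatisfaction(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort(
    (a, b) => b.predictedSatisfaction - a.predictedSatisfaction,
  );
}

// A sensational video that keeps people glued can outrank a video the viewer
// would actually be happier with under the first objective, but not the second.
const candidates: Candidate[] = [
  { videoId: "conspiracy-marathon", predictedWatchMinutes: 42, predictedSatisfaction: 0.3 },
  { videoId: "how-to-drain-a-washing-machine", predictedWatchMinutes: 6, predictedSatisfaction: 0.9 },
];

console.log(rankByWatchTime(candidates)[0].videoId);    // "conspiracy-marathon"
console.log(rankBySatisfaction(candidates)[0].videoId); // "how-to-drain-a-washing-machine"
```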
Engaging content gets recommended, which is bad
During his talk at the DisinfoLab Conference last month, Chaslot noted that divisive and sensational content is often recommended widely: conspiracy theories, fake news, and flat-Earther videos, for example. Basically, the closer a video stays to the edge of what’s allowed under YouTube’s policy, the more engagement it gets. Google completely disagrees with Chaslot, but we’ll get to that later.
The basic structure of YouTube’s recommendation algorithm might’ve worked fine for its core types of content — like cat videos, gaming, and music. But as YouTube becomes more central in people’s information and news consumption, Chaslot worries recommendations will push people further to extremes — whether they want it or not — just because it’s in YouTube’s interest to keep us watching for as long as possible.
Chaslot’s take on Facebook’s natural engagement pattern: “The best way to use social media is to surf the policy line.”

Mark Zuckerberg admitted last year that borderline content was more engaging. Google did not want to answer TNW’s questions as to whether the same was true for YouTube, but the company’s spokesperson said in a discussion at the DisinfoLab conference that the company’s studies showed people actually engaged more with quality content. Chaslot says this is something the big tech companies will have to debate between themselves, but based on his own experience, he’s more inclined to believe Zuckerberg at this point.
“We’ve got to realize that YouTube recommendations are toxic and it perverts civic discussion,” says Chaslot. “Right now the incentive is to create this type of borderline content that’s very engaging, but not forbidden.” Basically, the more outlandish content you make, the more likely it’ll keep people watching, which in turn will make it more likely to be recommended by the algorithm — which results in greater revenue for the creator, and for YouTube.
But what about actual, concrete examples of problematic recommendations?
When recommendations go wrong
In Chaslot’s mind, pointing out that the algorithm’s incentives are completely broken (watch time doesn’t equal quality) should be enough to show why it’s bad for us as a society. But to actually demonstrate its effects, he built the AlgoTransparency tool after he left Google. The tool is meant to give people a better overview of what’s actually being recommended on YouTube.
Basically, it tries to find out which videos are recommended from the most channels, to provide an overview you can’t get through your personal browsing. Chaslot points out that most often, the top recommended videos are innocuous, but every now and again, problematic videos pop up.
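For illustration, here’s a minimal sketch of that counting idea: given observed pairs of “channel being watched” and “video recommended next”, tally how many distinct channels each video is recommended from. The data shapes are invented for this sketch; AlgoTransparency’s real crawling and methodology are documented on its own site.

```typescript
// Toy sketch of the counting idea described above, not AlgoTransparency's actual code.
interface Observation {
  channelId: string;          // channel whose videos were being watched
  recommendedVideoId: string; // video that showed up in "Up next"
}

// For each video, collect the set of distinct channels it was recommended from.
function channelsRecommendingEachVideo(
  observations: Observation[],
): Map<string, Set<string>> {
  const byVideo = new Map<string, Set<string>>();
  for (const { channelId, recommendedVideoId } of observations) {
    if (!byVideo.has(recommendedVideoId)) {
      byVideo.set(recommendedVideoId, new Set());
    }
    byVideo.get(recommendedVideoId)!.add(channelId);
  }
  return byVideo;
}

// Videos recommended from many different channels float to the top,
// regardless of how many views they have on their own.
function topRecommended(observations: Observation[], n: number): string[] {
  return [...channelsRecommendingEachVideo(observations).entries()]
    .sort((a, b) => b[1].size - a[1].size)
    .slice(0, n)
    .map(([videoId]) => videoId);
}
```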

This video funded by the Russian government was recommended more than half a million times from more than 236 different channels. https://t.co/aRNUx2WIOm
2/
— Guillaume Chaslot (@gchaslot) April 26, 2019

When the Mueller report detailing whether there was any collusion between Russia and Donald Trump’s presidential campaign was released, Chaslot noticed that the analysis recommended from the most channels was a video from RT — a state-sponsored Russian propaganda outlet.
That means that if Chaslot is correct, YouTube’s algorithm amplified a video explaining the finding on possible Russian collusion made by… Russia. The video upholds what could be considered a Kremlin-friendly narrative and slams mainstream media. Naturally, Chaslot’s claim caught the attention of the media and was covered widely.
Chaslot says that while other Mueller-related videos got more total recommendations, the RT video stood out because it was recommended like crazy for two days and then nothing — despite having relatively few views.
“It was really strange to see this amount of channels that recommended this video, compared to how many views it had,” says Chaslot. “The weird thing is that nobody really understands why this happened.”
Now it’s important to get Google’s side of things. Google has completely disowned AlgoTransparency’s methodology (which you can find here) and told TNW it doesn’t accurately reflect how recommendations work — which are based on surveys, likes, dislikes, and shares.
“As with most assertions made by AlgoTransparency, we have been unable to reproduce the results here. We’ve designed our systems to help ensure that content from more authoritative sources is surfaced prominently in search results and watch next recommendations in certain contexts, including when a viewer is watching news related content on YouTube,” the company said.
Chaslot says that if there are discrepancies with his results, he’d love to know what they are. And the best way to do that is for Google to simply share which videos YouTube is recommending to people. But the company hasn’t revealed more information on the matter.
Chaslot also points out that his methods seem to highlight the same faults in the algorithm that Google itself finds. Earlier this year, a Google software engineer gave a talk about correcting biases in YouTube, and one of the videos used as an example had previously been flagged by AlgoTransparency. So something about his approach seems to be working, but we won’t know for sure until Google becomes more transparent about recommendations.
Credit: AlgoTransparency. From AlgoTransparency’s website, showing part of its methodology.

What should we do about it?
YouTube doesn’t currently give users many options to control the recommendations they receive. Sure, you can block some channels, but Chaslot points out that the algorithm might still push you towards similar channels, because you’ve already “shown interest” in this type of content. So what can you do?
“The best short-term solution is to simply delete the recommendation function. I really don’t think it’s useful at all to the user,” Chaslot explains. “If YouTube wants to keep recommendations, it could stick to the curated ones done via email — where humans make sure nothing crazy gets on there — or just make them stick to channels you’ve subscribed to.”
Chaslot also acknowledges that most people — himself included — are too curious not to click borderline content, so he uses a Chrome extension called Nudge that removes addictive online features like Facebook’s News Feed and YouTube recommendations. “I still click on stupid things when I see them, so the best thing to do is to just remove them.”
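For the curious, here’s a rough sketch of that “just remove them” idea as a browser content script, in the spirit of extensions like Nudge: hide the “Up next” sidebar and switch off autoplay. The CSS selectors are assumptions about YouTube’s current markup and will break whenever it changes; this is not Nudge’s actual code.

```typescript
// Rough sketch of the "just remove them" approach: a content script that hides
// YouTube's "Up next" sidebar and turns off autoplay. Selectors are assumptions.
function hideRecommendations(): void {
  const style = document.createElement("style");
  style.textContent = `
    /* assumed selectors for the "Up next" / related-videos column */
    #related, ytd-watch-next-secondary-results-renderer { display: none !important; }
  `;
  document.head.appendChild(style);
}

function disableAutoplay(): void {
  // assumed selector for the autoplay toggle; click it if it is currently on
  const toggle = document.querySelector<HTMLElement>(
    '.ytp-autonav-toggle-button[aria-checked="true"]',
  );
  toggle?.click();
}

hideRecommendations();
disableAutoplay();
```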
Credit: Nudge. Nudge removes all the addictive stuff from social media sites… which is pretty much everything.

All of this just treats the symptoms, not the cause. Chaslot believes the real focus needs to be on long-term solutions. At the moment, users are fighting against supercomputers to try to protect their free will, but it’s a losing battle with the current tools.
That’s why Chaslot is convinced the only way forward is to provide proper transparency and give users real control: “In the long term, we need people to be in control of the AI, instead of AI controlling the users.”
