Falsifying Echo Chambers: Reddit’s inept war on bots continues to fail

Posted on 3/30/2018 · 0 comments

In this piece we:

  • make an ant pun,
  • accuse Reddit of employing inept staff or profiting off uninformed traffic,
  • wonder what it’s like in Jogman’s world.

hello please show me to the sugar thank you

Imagine for a moment that you wake up one day to find your apartment infested with ants. There’s no long story with witty shoutouts to the services and feeds we use daily in this intro: You open your eyes one morning and see a thin black line of ants walking up your wall. This is taking a moment for you to process because you’ve never had ants walking up your wall. We’ll assume in this hypothetical that your apartment isn’t spotless, but you know for a fact that at least two other spaces in your building would be much more lucrative targets.

In the bathroom, you think to yourself, “This is not the way I wanted to spend my day off.”

When you lift the toilet paper from the counter, because this is my hypothetical and maybe you don’t put new rolls back on the spool in this timeline, you notice a scattering of ants. You start to notice them everywhere as you move things around your house: under the toaster, under the milk in the fridge, under things that don’t even make sense.

The ants probably didn’t know it was your day off today, but they picked a poor day to present you with a problem that necessitates an immediate solution.

Because you were going to get spicy noodles. And now you’ve got to deal with this.

 

So you have two options when it comes down to it at this exact moment.

You can run around the house and, every time you spot an ant, smash it. The ants keep coming back every time you do this, if not in greater numbers, as the building paste of ant-matter only further alerts more ants that this is a battle theater. So the ants are also now biting you.

Or you can look at the factors that resulted in ants: their methods of entry, their goal, and what they continue to do over and over.

To wrap up a hypothetical situation that we’re all kind of glad didn’t extend into the entirety of this piece: reddit’s current method appears to be the brute force banning of accounts after report and quick analysis.

The reasonable approach would be to extract the account’s information, recognize patterns and create an internal system within the moderation-base to indicate the account is worth keeping an eye on. A problem here is that, while Spez did indicate that a number of accounts had been removed, there was no information regarding what those users were propagating. Presumably because doing that alienates a userbase and eats into those delicious ad impressions.

Provide the full scope of what Russia is intentionally pushing as narrative and you’ve lost vast swaths of your Conservative, Conspiracy, Green Party, Bern, *politics/news groups with just a hint of white nationalism and — most importantly — The Donald. The full scope would show that, like Facebook targeting ads between 12/2015 and election day, this wasn’t a case of one target audience so much as pitting multiple political factions against one another by artificially constructing an echo chamber.

It’s many groups.

And they will be profoundly irritated if told they bought into a foreign narrative.

Knowing this, Reddit has opted for the Reliable Until it Bites You option of “Please keep reporting bots. We will deal with them.” This gives the illusion of being proactive. Presently, that’s really all Reddit needs to do since, checking traffic metrics, this hasn’t had any real impact on traffic. The Daily Beast did a thing. Wired wrote a story. There was a follow-up by one or both. Then it just kind of floated back to sea during the next news cycle’s high tide.

The issue here is that Reddit, a user-contribution driven aggregation website, is being absolutely disingenuous when providing statements that this is (paraphrasing) “… a super-organic system that we just can’t quite get a hold on because, you know, it’s tough running this site. But it’s all of our responsibilities to make sure the links we post are truthful…”

Let me be crystal clear with you. As I type this sentence, I’m taking a shot of Travelers Club at 4 a.m. Three years ago, I was doing the same thing writing a series of pieces about Facebook.

If I can see these patterns, which present themselves as numerical data and not just bias in a headline or the subject of shared content, it seems a little weird to suggest a group like Reddit is unable to construct an algorithmic filter that doesn’t impose on the non-mod user. So the answer is either an unwillingness on Reddit’s behalf, or the missing factor is Travelers Club whiskey intake.

Yes, there are obviously going to be a lot of complex factors involved in determining if someone is propagating content with the intent of furthering another country’s narrative. Especially since, if you’ll read that previous sentence again, that description could apply to just about everything shared by anyone ever. To really get into this, let’s hop into Reddit’s Bot of the Now.

Quick disclaimer. The easiest method for me to visually illustrate what’s going on here is to use live accounts on Reddit. Let’s run under the assumption that if I post a user here, especially in these first few 101 pieces, there will be no question whether they are a bot or not. This portion will just focus on post-bots. We’re not even getting into accounts that comment. Yet. Mostly because there are presently accounts accruing karma that post nothing but polarizing content.

If Reddit can’t address users that only post foreign narrative, exactly what sort of trust should users have that this very real situation will be addressed with any kind of efficiency? How are we not left to believe that this is an intentional oversight by an entity that makes money based on traffic, not truth?

Because if you can’t handle Reddit’s Bot of the Now: what can they handle?

jogman308

Cake day: Oct. 18, 2016
Link karma: 35.8k
Comments: 0
Link posts: over 1,000

When I dropped that disclaimer, I wanted to underscore that we’re going to be starting this slow. This account has almost 40k link karma, zero comments, and has been active for a little under two years. I’m not trying to PropOrNot McCarthy this series; it’s just pretty suspicious. That being said, we are about to get into some bumpy territory.

 

Andrew Aaron Weisburd wrote an exceptional quick post on referral networks. I’m going to post all of it here with attribution and a link, along with a graphic that hopefully won’t feel as adapted to a purpose as PropOrNot’s.

Weisburd:

In webspeak, a referrer is a site that refers its readers to another site via hyperlinks. The posting of those links may be a conscious act on the part of a site’s operators, or it may be the work of a contributing reader, and as such may not constitute an endorsement. That said, the study of referrers can reveal the pathways along which information of all sorts moves. In the present instance, I made use of the free referrer data provided by similarweb.com. For any given site SimilarWeb reveals what they believe to be the top five referrers to the site, as well as the top five sites the target refers their own readers to. In the graph below, the arrows indicate the direction of the referral. The snowball method was used to identify new sites to add to the data until I hit the point of diminishing returns. Regarding disinformation, keep in mind that this is a general characterization, and refers to info that is not necessarily fake, but more likely insanely skewed in some direction not necessarily supported by what facts may exist.

Here’s a graphic that goes with that for ZeroHedge.
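The snowball method Weisburd describes can be sketched in a few lines. This is a minimal illustration, not his actual tooling: the referrer data here is a hypothetical stub (a real version would query an analytics service like SimilarWeb), and the `.example` domain names are placeholders.

```python
# Sketch of the snowball method over referrer data.
# REFERRERS is a hypothetical stub: site -> sites that refer readers to it.
from collections import deque

REFERRERS = {
    "zerohedge.example": ["dailycaller.example", "lewrockwell.example"],
    "dailycaller.example": ["zerohedge.example"],
    "lewrockwell.example": ["zerohedge.example", "dailycaller.example"],
}

def snowball(seed, lookup, max_depth=3):
    """Breadth-first snowball: follow referrers outward from a seed site,
    collecting directed edges (referrer -> target) until no new sites
    appear or the depth cap is hit."""
    edges, seen = set(), {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        site, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for ref in lookup.get(site, []):
            edges.add((ref, site))  # arrow points toward the referred site
            if ref not in seen:
                seen.add(ref)
                frontier.append((ref, depth + 1))
    return edges

graph = snowball("zerohedge.example", REFERRERS)
# Mutual referrals show up as edge pairs running in both directions:
mutual = {(a, b) for (a, b) in graph if (b, a) in graph}
```

The mutual-edge check at the end is the interesting part: sites citing each other as sources is exactly the self-referencing loop discussed below.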

What’s problematic about accounts like Jogman is that the shared content specifically focuses on websites that use the same sources as reference — and on those sites using each other as reference.

So you’re saying Russians write ZeroHedge and Daily Caller?

Cynically, if you asked me that via IM, I’d reply with a meme that low-key agreed. But since a neural network will be working on that propagation model for a while before I can say yes: no, that’s not what I’m saying.

What I’m saying is that a user is posting, ruthlessly. Over their last 1,000 posts, they’ve focused on:

“So what,” the hypothetical reader says in a poor form of narrative control, “maybe they’re just a bot that posts links to right-wing content.”

A couple of problems with that.

I’ve created bots before. The intent is to create a system whose output seems organic. But the links posted in these situations aren’t driving clicks to a proxy-registered website that appeared two years ago. There’s no real overlap in ad IDs.

Problem one: If you created a bot that posted on reddit — and the bot isn’t posting links to locations where you are making money via ad impressions: what are you making money on?

The links follow the same circle of LewDailyZeroPhi over and over, but I initially ran into this account while collecting data on accounts that posted one of around 100 domains to reddit that were absolutely CA/RU sites. Again: this is a multi-part series. I have a lot to explain, but I’ve been conveniently placed in FB jail for the foreseeable future.

Post times.

What time something gets posted is a funny thing. 

Especially if graphed out. The above graph is in EST, by the way, and obviously a cap from Snoo. Which, bless Snoo. You’ve helped me visualize a lot of things for this section.

Here’s the thing about programming bots to post links in what appears to be an organic act of sharing: If you’re programming something like that, you already know what times they should be posting for the best movement of your link. How slow it is between Saturday midday and Sunday until 7 p.m. EST. You schedule your content to move when content moves, and then you tremor content an hour and a half later. Like, maybe you folks have a different method, but mine is literal premeditated murder to the top slot of What’s Hot.

This account isn’t posting in set intervals. This account isn’t posting throughout the day on a variety of subs, or even just the top 10. Flipping through its post history, it’s obvious that stories from profoundly biased sources are being sorted into the locations where they’ll grow best.
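The post-time analysis above can be sketched with a simple hour-of-day histogram. The timestamps below are synthetic stand-ins I made up for illustration, not Jogman’s real history; the point is only how the two shapes differ.

```python
# Sketch: bucket an account's post timestamps by hour-of-day (EST) to see
# whether activity looks like a cron job or a human schedule.
# All timestamps below are synthetic stand-ins, not real post histories.
from collections import Counter
from datetime import datetime, timedelta, timezone

EST = timezone(timedelta(hours=-5))

def hour_histogram(timestamps):
    """Count posts per hour-of-day in EST."""
    return Counter(ts.astimezone(EST).hour for ts in timestamps)

# A fixed-interval bot lands on the same hours every single day...
bot_posts = [datetime(2018, 3, d, h, 0, tzinfo=EST)
             for d in range(1, 8) for h in (0, 4, 8, 12, 16, 20)]

# ...while a human-scheduled account clusters in waking hours
# and goes completely dark overnight.
human_posts = [datetime(2018, 3, d, h, 17, tzinfo=EST)
               for d in range(1, 8) for h in (9, 11, 14, 18, 21)]

bot_hist = hour_histogram(bot_posts)
human_hist = hour_histogram(human_posts)

# The human-run account has an empty overnight window; the bot does not.
overnight = range(0, 5)
```

Graph either histogram and the difference is obvious at a glance, which is the whole point of the Snoo caps in this section.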

So what does a bot look like?

What you’d expect. Here’s u/RavBot.

That bot obviously runs at specific times. Early EST isn’t so much an issue, but its pattern is so pronounced that it’s obvious it runs a process every four hours.

What about bots that are perpetually active?

Let’s take a look at Gfycat_Details_Fixer. 

That’s an open-listen bot: it monitors a stream and posts when criteria are met.

Jogman is a work schedule, though. Jogman is a human.

Here’s a question: in contrast to the two examples above, find a time Jogman posted at 4 a.m. There are a lot of reasons I could postulate. My first guess would be transit but, without anything to back that up: this account does not post in programmed time frames.

It also does not post indiscriminately.

And never once has it posted between midnight and 4:49 a.m.

If the account does not run a global process on a specific hour. But stops at a specific time every night. And doesn’t post all the time: someone is doing what I’d do with a bot. They are intentionally posting self-referencing, biased narrative as targeted marketing until it works.

Either that, or they work daily for the IRA.

And I wish I could say that were the case. But someone just programmed a post-bot for the exact right times with variance.
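The three conditions laid out above — no fixed global interval, a hard nightly stop, and real quiet periods — can be folded into one quick heuristic. This is my sketch, not anyone’s production bot-detector; the thresholds are guesses and the sample timestamps are synthetic.

```python
# Heuristic sketch: irregular gaps + an overnight dead zone + genuine
# quiet periods => a human work schedule, not a cron job.
# Thresholds are assumptions; timestamps are synthetic stand-ins.
from datetime import datetime, timedelta, timezone
from statistics import pstdev

EST = timezone(timedelta(hours=-5))

def looks_human_scheduled(timestamps, dead_zone=(0, 5)):
    """True when post times fit the profile above: gaps vary, nothing
    lands in the overnight dead zone, and the account isn't always on."""
    ts = sorted(t.astimezone(EST) for t in timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    cron_like = pstdev(gaps) < 60          # near-identical gaps => scheduled process
    posts_overnight = any(dead_zone[0] <= t.hour < dead_zone[1] for t in ts)
    always_on = max(gaps) < 3600           # never quiet for even an hour
    return not cron_like and not posts_overnight and not always_on

# Synthetic examples: irregular daytime posts vs. an exact 4-hour cycle.
human = [datetime(2018, 3, d, h, m, tzinfo=EST)
         for d in range(1, 5) for h, m in [(8, 13), (12, 47), (19, 5)]]
cron = [datetime(2018, 3, d, h, 0, tzinfo=EST)
        for d in range(1, 5) for h in (0, 4, 8, 12, 16, 20)]
```

None of this is exotic; it’s the kind of filter the piece argues Reddit could run internally without imposing on regular users.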

Here’s the profile.

Here’s their Twitter that is almost the exact same.

It isn’t a secret what days and what times you need to post on reddit, and on what subs. Reddit understands this. Much like they understand that a few million users can turn a seemingly organic system into a moon-controlled tide.

Wrapping back around:

Are we to believe that Reddit’s team of developers is unable to address issues like this? Issues that are easily observable because they are still happening and direct monitoring can be applied?

This one time, Russia was pushing a news-cycle story of “OH JESUS THE MUSLUMZ ARE GETTING THESE WARHEADS HALP” and tried to make it move on Twitter. Luckily for us as a country, our Muslim hashtags were fading from relevancy. So a Russian quagmire of fake posters tried to get the story moving. Here’s what that looked like after 1.5h. That should scare the shit out of you.

That’s two absolutely false stories from RT and SputNews activating reserves of bots that normally just retweet, in an attempt to start something. It’s successful much of the time; however, this one specific case where it wasn’t is interesting since, lacking US Usefuls, this is natural-form propagation.

We were all lied to. And we are still being lied to.

By our hosts. By our government. But not in that weird Alex Jones way.

I’ve been researching this phenomenon for almost four years.
Tomorrow we’re going to go a little deeper.

Like this? Share it. 

Venmo/PayPal embeds are for conclusions; not openings.
And I’m not going to pretend anyone will read this series hard enough to contribute.

Thank you, Brian.

  • addendum: I owe considerable thanks for time on an AWS instance, for the ability to meet an amazing number of debt obligations for the month of May, and for helping me realize that $300/mo isn’t a living wage when actual things (bird; vet; consolidated debt, and I’m paying it; food; car; rent) can never be met on your current pay. I didn’t do anything dumb, and I’m wearing contacts. Realizing I make $300/w, your generosity let me know I should leave.

