The Philosophy of Nate: Repair, or replace? It may not be your choice. <p>If you’ve bought a new electronic device – phone, laptop, and so on – in the last several years, you might have noticed a few changes from older devices. Yes, of course the software looks newer and the device is probably smaller, but there’s another change too: it resists being taken apart or repaired. Maybe the screws are a weirder shape, or you can’t take the battery out. Or, maybe you’ve tried and after turning the device back on it refuses to work or gives you error messages that make it hard to use. Instead, you might have had to take it back to the company or simply buy a new one, probably costing more money.</p> <p>That’s not an accident – companies would prefer that you pay them, instead of letting you pop in a new battery yourself to get a few more years out of a device that otherwise works fine. And, it’s not out of a concern for safety, because products can be designed to be easy and safe to repair. Take, for example, <a href="">Framework</a>, which makes laptops where every component is labeled and easy to replace, and you only need one screwdriver because everything on the laptop uses the same screws. Framework even has suggestions for using its parts to build things other than laptops, should someone be looking to invent something new.</p> <p>And some companies are very serious about making it harder to fix your stuff. Apple <a href="">invented its own screws</a> to stop its customers from opening devices without special tools, forcing them to take devices to an Apple-authorized repairer. John Deere refuses to provide access to software, making it impossible to adjust or repair farm equipment without paying John Deere, and spent years in legal battles <a href="">trying to stop its equipment from being repaired</a> by owners, citing the possibility of copyright infringement (of both John Deere software and music).
Tesla has <a href="">refused to provide diagnostic tools</a> to buyers or independent shops, requiring Tesla owners to visit Tesla for repairs. And although the companies may repair devices now, they can decide at any time that a device is too old to be repaired, even if all it needs is a new battery.</p> <p>Everything from farm equipment to cars to kitchen appliances is being designed to be harder to fix, if it can be fixed at all. If you own something, you do have the legal right to fix it. The problem has become that companies aren’t providing the tools, parts, or instructions for you to do so (though they exist, and are used by technicians authorized by the company). By doing so, those companies are taking away your ownership of what you bought by forcing you to bring it back to them for repairs, for a price. Even if you choose not to repair things yourself, the cost is still yours, because it’s reflected in the prices of everything else you might need to buy.</p> <p>The good news is, the tide may be turning. The Right to Repair movement has been pushing companies and legislators to allow devices to be repaired by their owners. As the name suggests, if you own something, you should have the right to fix it. And so far, Right to Repair has gotten some attention. As of the end of 2021, <a href="">25 states in the U.S. have considered right to repair</a> laws, which would require companies to provide parts and access for those who want to repair their devices. Companies, too, have started to notice. As of 2022, Apple offers a repair kit for its devices (although the kit is <a href="">clearly designed to discourage repairs</a>). And as of just this month, January 2023, <a href="">John Deere agreed</a> to give its customers tools and information to repair their equipment.</p> <p>There’s still <a href="">a ways to go</a>, however.
New York State <a href="">enacted a right to repair law</a> – but Governor Kathy Hochul added loopholes and exceptions that limit how effective it will be. Apple’s repair kit appears to leave much to be desired. And <a href="">there are other factors</a> that we may not fully see the impact of until right to repair becomes standard, such as the impact on prices and the secondhand device market. Even so, right to repair has the potential to save us money, help us make less waste, and in general, allow us to more fully own the things we buy – even if we choose not to repair them.</p> Fri, 20 Jan 2023 00:00:00 -0500 Digital Native Does Not Mean Digitally Literate <p>For many, there is a difference between being able to drive a car and being able to repair a car. Just because you can drive doesn’t mean you know how (or maybe just don’t want) to work on your car yourself. The digital world is the same way–by design, our devices are generally easy to use but less easy to deeply understand. Just because you can use a device doesn’t mean you’re proficient in digital literacy. Digital literacy <a href="">extends beyond the ability to use a device</a> to a much wider array of skills, including evaluating the trustworthiness of information. It encompasses such a broad range of skills that some experts prefer to list specific skills rather than use the term “digital literacy.”</p> <p>There’s a common assumption that a lack of digital literacy skills is more likely in people who are older, who may have adopted tech later in life. In some regards this can be true because there is a set of baseline skills needed to operate technology, and features like gesture control and swiping may not be discoverable to someone new to a touchscreen.
But, some research has shown that digital literacy skills are actually worse by a considerable margin for individuals who are digital natives–that is, people who have never known a world without computers, the internet, and smartphones. Technology has gotten easier to use, but better ease of use and accessibility–which are good things–may not foster the level of digital literacy needed to use technology effectively.</p> <p>In a 2019 study by Pew Research, most U.S. adults surveyed <a href="">could correctly answer less than half of the questions on a digital literacy quiz</a>, and did worse overall with privacy and digital security questions. The question most participants did the worst with, and which is not particularly relevant to digital literacy, was to identify a photo of Twitter’s former CEO Jack Dorsey. Other questions were about more relevant information. Results from the 2018 International Computer and Information Literacy Study, which surveyed eighth graders, were worse–<a href="">about 2% of the participants scored at the skill level “digital native” implies, and only about 19% were able to use computers independently</a> for gathering and managing information. Even academic high achievers <a href="">appear to be lacking these skills</a>.</p> <p>The big concern with these results is that digital literacy is essential for staying safe in the digital world. Concern about scams, online censorship, propaganda, misinformation, and disinformation is rightfully growing, and <a href="">recent research has shown</a> that a person’s level of digital literacy is associated with their ability to discern fake content. Lower digital literacy corresponded with less ability to figure out if a headline was accurate. Other data <a href="">showed a 156% increase</a> in the number of people under 20 falling for online scams between 2017 and 2020. Digital native clearly does not mean digitally literate.
And in that gap is enormous room for individuals to lose their money, to lose their credibility, and to be at risk of recruitment by foreign or extremist groups.</p> <p>But, the good news is that digital literacy is a teachable skill. It does not mean learning how to code. Building software is a set of fairly specific skills and can be an element of digital literacy, but it doesn’t teach how to evaluate headlines or avoid scams. <a href="">More important</a> are skills like media literacy and basic online self-defense–using password managers and 2FA/MFA, and knowing when and how to give out personal information over a trusted channel. It isn’t easy–<a href="">just this week a scam prevention expert fell for a scam</a>–but it is possible. There are <a href="">real improvements shown</a> by people who are taught digital literacy skills. We need to change our assumptions about digital literacy and focus on teaching digital literacy skills that matter, instead of just trying to teach kids how to swipe and how to code.</p> Sat, 02 Apr 2022 00:00:00 -0400 Can the President of the United States be Deplatformed? <p>Deplatforming, or taking away the online platform by which someone presents themselves or their brand, is an odd phenomenon on the Internet. It centers on the power of big platforms to kick people off their services, temporarily or permanently, which can cause massive damage to the person or brand being deplatformed. One can find themselves very suddenly losing thousands of fans, followers, or customers with very little recourse. Although deplatforming usually comes up in conversation about big tech like social media services suspending accounts, other types of companies with a huge reach can do it too–such as payment processors <a href="">blocking their cards</a> from being used at particular places.
A company might choose to deplatform something for any reason–it’s too controversial, it’s too offensive, it’s illegal, or it’s just against the company’s values.</p> <p>Whatever the reason for a company choosing to silence something, doing so is currently legal and <a href="">does not violate the First Amendment in the U.S.</a>–which protects against censorship by the government, not by private companies. When a social media service suspends an account, it’s fully within its own First Amendment rights to do so–though the writers of the U.S. Constitution likely never conceived of the size or reach of today’s companies. And although tech platforms adhere to traditions of neutrality and free speech, they can and sometimes do choose to break those traditions. I’ve discussed deplatforming before <a href="">on this blog</a> and <a href="">in my books</a>, usually in reference to companies <a href="">dropping white nationalist neo-Nazi groups from their services</a>. That type of censorship is often considered somewhat acceptable by the online community and some see it as a <a href="">positive assertion of civilized values</a>. However, it’s a murky area of ethics and neutrality–and in past years, even companies that have done it have made statements that boil down to saying some kinds of web companies shouldn’t do it.</p> <p>With politicians joining everyone else on the web, deplatforming and social media rules have become an even more complicated issue. What happens when a world leader or an entire government violates a private platform’s rules? Does removing a government account violate the rights of others on the platform to be able to communicate with their leaders? Can the government follow people? Can it block them? Services were likely not developed with the idea that these were questions that may need to be answered. Nor were laws. But, we’re starting to see efforts to answer them.
The United States Court of Appeals for the Second Circuit <a href="">ruled</a> that it is unconstitutional for the president to block people on Twitter, and Twitter has explained that it operates the accounts of political figures differently from others.</p> <p>Throughout his term, President Trump has famously tested the limits of social platform rules. Twitter, his site of choice, has warned him against certain behaviour, has <a href="">marked his tweets</a> with notices, and has occasionally <a href="">temporarily prevented him, his family, and his campaign from sending Tweets</a>. Other social sites sometimes followed. As the end of Trump’s term approaches, online services are becoming increasingly aggressive about enforcing their rules.</p> <p>And it raises a question: Can the President of the United States be deplatformed?</p> <p>After insurrectionists attacked the U.S. Capitol, tech companies decided the answer was “yes.” As of the writing of this article, Twitter has <a href="">temporarily prevented the president from tweeting</a>. Facebook and Instagram (owned by Facebook) have suspended him (this link goes to Facebook) <a href="">“indefinitely and for at least the next two weeks until the peaceful transfer of power is complete.”</a> Shopify has <a href="">suspended the Trump Campaign and Trump Organization online stores</a>, and Twitch (owned by Amazon) <a href="">suspended his streaming channel</a>. Some said the suspensions didn’t <a href="">go far enough</a> or happen soon enough–and they may have a point because social media has a tendency to magnify problem content. Indeed, much of Mr. Trump’s rhetoric on social media has been a problem–but that’s for a different discussion.</p> <p>It’s a bizarre situation because on one hand, the president has frequently and vocally violated the rules of the services. As private companies, they can choose not to provide a platform.
However, given their explanations that they <a href="">allow more leeway for world leaders’ behaviour</a> and then waited to enforce rules, they’re suggesting that they have power over the voice of the U.S. President and others. And so, we’re faced with a serious question: given their massive audiences, what should the limits be on the editorial control online services are allowed to exercise? As problematic as President Trump’s Twitter use has been, it has highlighted the fact that there are gaps in social media rules, neutrality, and online freedoms of speech–and that special treatment does exist.</p> <p>We can find a few answers in net neutrality principles and laws such as Section 230. But, we need to hold tech companies more accountable. Even if we agree with some deplatforming decisions and think they are positive assertions of civilized values, tech companies don’t always uphold their traditions of neutrality, and they don’t always silence only the people we hope they will. If the Internet is intended to be a bastion of free speech, then neutrality is more important than an arbitrary power to deplatform. We need to figure out how to balance promises of free speech with the moderation that is so necessary for online discourse and safety.</p> Thu, 07 Jan 2021 00:00:00 -0500 No Justice, No Peace <p>This month we’ve had four instances of violence against Black people go viral or almost viral. We saw Amy Cooper <a href="">appear to pretend to be under attack</a> because an unarmed person of color politely asked her to follow the law and leash her dog. We saw <a href="">George Floyd</a> die under the knee of an officer who violently arrested him because a store thought a $20 bill looked suspicious. We found out about <a href="">Ahmaud Arbery</a> who was fatally shot in the back while he was out for a run in February, for no other reason than the color of his skin.
And, we heard about <a href="">Breonna Taylor</a>, who was fatally shot in her apartment earlier this year because police had an outdated address and a no-knock warrant–a story that barely hit the news at the time.</p> <p>These are just the ones we know about because they hit the Internet hard. There have been more. And what’s so saddening is that none of this is surprising. We <em>expect</em> gun violence. We <em>expect</em> violence against people of color. We <em>expect</em> violence against minorities. It’s so common that it barely makes it into the news. But for some reason, we’re reluctant to fix it. Because of racism. Because of ignorance. Because of politicians and people who are afraid of change, and who are afraid of losing their feelings of status or big donors. Because of an imbalance of power and wealth.</p> <p>Some folks look at the riots happening right now and ask “why aren’t they peacefully protesting? Why the violence?” But people have been, and for decades. Peacefully and respectfully. There have been many large, peaceful protests this week. And largely, everyone shrugged. Colin Kaepernick lost his career after a peaceful protest that was <a href="">suggested to him by a retired Army Green Beret</a>. We’re outraged now because we’re literally seeing our cities burn–due to antagonizers <a href="">who aren’t part of the protests</a>–but not when hateful incidents happen every day. I do not in any way condone the violence, but I am unsurprised by it. This has been brewing for a long time, and it’s not just about George Floyd.</p> <p>What these protests show is that it’s not enough for us to just not be racist. We need to actively denounce hate, racism, and white supremacy. Being silent is being complicit. We need to advocate for change–be it reforms, or radical changes like defunding and rebuilding problematic institutions from the ground up. It won’t be easy. It won’t be comfortable. But if we don’t, this will happen again.
It isn’t enough to arrest the people looting and inciting violence, or to fire the police officers <a href="">antagonizing</a> <a href="">peaceful</a> <a href="">protesters</a>. Until we fix our institutions, the lives of minorities will not improve, protests will continue, tensions will rise, and it will be easy for someone new to incite violence again. If you can’t advocate for change because Black lives matter, advocate for change so this doesn’t happen again.</p> <p>Alongside that, we need to make sure we don’t give up our rights and protections out of fear. Many of us, myself included, are watching our cities burn. Looters were arrested within walking distance of where I live. We’re seeing places we know get looted, vandalized, and destroyed. It’s upsetting. In response, some cities are <a href="">extending contact tracing to find people involved in protests</a>. There have been <a href="">military predator drones circling Minneapolis</a>. The U.S. military is reportedly <a href="">ready for a military response in Minnesota</a>, which last happened in 1992 in Los Angeles and would be a significant escalation. Reporters have been <a href="">arrested</a> and have been <a href="">shot at</a> by both police and rioters. Although what we’re seeing is scary and upsetting, this is not the answer to it. We have a lot of questions, but we can answer them without compromising our values. We need to be careful where we tread. I’m just as worried about these responses to the riots as I am about the riots themselves. Increasing surveillance and cracking down on speech, peaceful protest, and journalism will not make us safer.</p> <p>We will emerge–and we <em>will</em> emerge–stronger and better, if we advocate for change and for protecting our rights. Going into Pride month, we remember that a turning point in the LGBTQ+ movement <a href="">was brutal riots in New York City</a>. The riots now will be a turning point too, regardless of who sparked the violence.
It’s up to us what happens next. We need hope, but not <em>only</em> hope. We need change, and we need to hold our leadership, law enforcement, and each other accountable now and always.</p> <p>A lot of people are hurting right now. Be kind. Be respectful. Be a helper if you can. Be safe. And remember, Black Lives Matter.</p> Sat, 30 May 2020 00:00:00 -0400 The EARN IT Bill Puts All Of Us In Danger <p>Technologies such as encryption and common-sense laws such as <a href="">Section 230</a> in the U.S. are a strong foundation for our safety and freedom of speech on the web. If you exist in the modern world–using this website, online banking, social media, credit cards, medical care, and so on–these protect you. Strong, proven, and non-backdoored encryption is the only way to keep your electronic data safe (because it <em>is</em> going online, even if <em>you</em> aren’t the one putting it there). Section 230 is a law that says we’re responsible for what we post online, rather than the site we posted it to (with a few exceptions), so sites will be less inclined to censor us. Encryption and Section 230 are why social media as we know it exists today, why we can forward a personal email without legal repercussion, and why many sites we rely on, such as Wikipedia, can operate.</p> <p>But, global free speech and safety are often under attack by politicians who don’t understand–or don’t care about–the repercussions of breaking encryption, backdooring it, or changing who is liable for what gets said online. They propose laws that put our online safety and free speech at risk in the name of child protection or maintaining morality–emotionally charged arguments that are hard to argue against and that are often not true. When passed, they can have wide-reaching repercussions, like <a href="">in Turkey where such laws are used for heavy-handed censorship</a> of safe educational resources like Wikipedia. And, it’s happening again.
In January of this year (2020), <a href="">Bloomberg leaked a draft bill</a> that was quietly being circulated by Senators Lindsey Graham and Richard Blumenthal.</p> <p>The bill, called EARN IT, <a href="">would create</a> a “National Commission on Online Child Exploitation Prevention” staffed by the Attorney General, the Chairman of the Federal Trade Commission (FTC), the Secretary of Homeland Security, and 12 other members chosen by Congressional leaders. Its job would be recommending “best practices for providers of interactive computer services regarding the prevention of online child exploitation conduct,” which would be backed by the threat of stripping a site’s Section 230 protections. These recommendations could then be unilaterally overridden by the Attorney General.</p> <p>While perhaps well-intentioned, EARN IT is not the solution to the problem. First, it’s <a href="">extremely and unnecessarily broad</a>, granting too much power to the commission and Attorney General. Under the draft bill, the commission it forms would be able to change and expand the law as it saw fit, as long as it could find an argument that the change might help prevent exploitation. The law could change unpredictably and without enough oversight due to changes in administration or actions from an Attorney General. Attorney General Barr, for example, <a href="">has made his opinions on encryption clear</a> (opinions that ignore experts), and could use the commission to make it impossible to keep our data safe by forcing companies to weaken their security. It also doesn’t provide any meaningful new authority to the government to actually solve problems.</p> <p>The bill could also make online innovation much harder. Forcing platforms to screen posts makes it difficult for a new company to get a start on the web. Even fairly large platforms, <a href="">such as Tumblr</a>, have had difficulty here.
While this is a problem, and one that the technology industry has been working to solve, EARN IT does few favors here–exploitative content is already illegal, and platforms are already obligated by federal law to take it down, report it to law enforcement, and cooperate with investigations. However, it does help big tech companies operate with less threat of competition.</p> <p>Finally, EARN IT lowers the bar for filing a lawsuit that could threaten a service’s Section 230 protections. A plaintiff would not need to prove a company actually knew about exploitative content to win a lawsuit. The bar would be set so low that a company simply providing a secure messaging service might be enough for a plaintiff to argue that the company acted “recklessly” under EARN IT. Here, too, the bill grants no new authority. Section 230 already allows the Department of Justice to enforce the law against a provider it believes is breaking the law and knowingly distributing exploitative content.</p> <p>Child protection and law enforcement are topics deserving of public concern, but EARN IT <a href="">doesn’t help those causes</a>. It doesn’t give the government any new powers; it just gives it the ability to strip the protections that keep us safe and our speech free. It’s a direct attack on the web that puts our right to free and private speech, our security, and the innovation we’ve come to expect from the web at risk. Forcing companies to weaken encryption online <a href="">or off</a>, or to censor speech, does not make us safer, and does not prevent child exploitation or cybercrime. In fact, it may actually increase cybercrime. Backdoors are sold on the dark web–finding them is a well-paying, full-time job for some hackers and <a href="">breaking encryption is a stated goal of the EARN IT bill’s sponsors</a>.
Not to mention, encryption has not stopped the Department of Justice <a href="">from taking action against exploitative sites</a>–which it did by analyzing financial transactions, rather than breaking encryption.</p> Sun, 15 Mar 2020 00:00:00 -0400 Announcing The Thought Trap (book preorder) <p>We have nearly unlimited information, social connections, and entertainment at our fingertips almost everywhere via the Internet. The scale of the services that bring that web to us is mind-boggling. Over <a href="">68% of adults in the U.S. are on Facebook</a>. Google alone saw <a href="">at least 5.5 billion daily searches</a> in 2016 and holds over two-thirds of the United States search market. <a href="">Amazon Web Services hosted over 148,000 websites</a> as of 2017.</p> <p>But, how often do we think about those services’ stewardship of the Internet? They curate the content we see, make decisions about who gets to speak on their platforms, and sometimes take down whole websites. And, they’re within their rights to do so – they have the same rights to free speech that we do. While they tell us that our content is important to them and that they take their content curation seriously, we don’t know what goes on behind the scenes. Nonetheless, we expect them to uphold a standard of fairness and neutrality, while protecting us from the worst the Internet has to offer.</p> <p>Things aren’t actually so utopian on today’s web. We’re overwhelmed with information. Our digital lives are tracked and analyzed. Our social feeds show us clickbait, hoaxes, and habit-forming features. New technologies have given us fake photos, Deepfake video, fabricated audio, and even realistic but completely computer-generated speeches from world leaders. They make for propaganda like no other – we can barely trust our eyes and ears when it comes to the contents of our feeds. Even the tools our social networks give us to report those problems are getting used against us.
Bias creeps into our personalized recommendations, our social feeds, and even our search results. We’ve seen our cloud giants – Facebook, Google, Twitter, and others – blamed for bias in their moderation and site features on more than one occasion.</p> <p>And, despite our best efforts, we’re not immune. The web is built, intentionally or not, to exploit how our minds work to turn a profit. Mob mentality, confirmation bias, and <a href="">our willingness to re-share without double checking</a> make it easy for us to be manipulated – to trade the commodity that is our attention, to change our opinion, or to sell us something. We don’t have many tools to know who is manipulating us, why they’re doing it, or when it’s happening.</p> <p><em>The Thought Trap</em> examines the dystopian side to our web of unlimited information. How are we getting manipulated? To what end? What’s happening across the threshold of the massive online platforms we’re all part of?</p> <p>Now available for preorder on e-book, launching (alongside a paperback edition) August 7th, 2019. Get more details at</p> Fri, 26 Jul 2019 00:00:00 -0400 Search Neutrality <p>We rely on search engines to navigate the world on a daily basis and in a variety of forms. Google alone saw <a href="">at least 5.5 billion daily searches</a> in 2016 and holds over two-thirds of the United States search market. Whatever it is we search for and on whatever search engine we choose, we expect the results we get to be relatively unbiased, ordered by their relevance to the search. Each search engine chooses and orders its results differently based on algorithms most hide from the public. A search engine, especially one the size of Google, has the power to make sites disappear into relative obscurity by listing them lower or not at all.</p> <p>Although we might expect our search results to be relatively unbiased, there isn’t a guarantee that they will be. 
In 2014, <a href="">a San Francisco court ruled</a> that the way Google orders its search results is protected by Google’s First Amendment rights. In other words, our search results are completely up to the company behind the search and what the company, or its algorithms, decide to censor. And they do drop sites from results; search engines lower the rankings of websites that load slowly, carry malware, or are deemed untrustworthy by their algorithms. They don’t always get it right, however. Google has surfaced conspiracy theories in its instant answers on more than one occasion and manually steps in to correct those results. In some cases, sites have figured out how to game the results to promote a questionable agenda, despite the best efforts of companies like Google.</p> <p>The fact that major search engines can make websites almost disappear from the Internet is as disruptive to a business as it is to the ability to be informed. Aside from well-meaning corrections and exclusions, websites are dropped from search in questionable circumstances. Foundem, a search engine launched by UK entrepreneur Adam Raff, was “penalized” by Google, effectively making it disappear from the Internet from 2006 to 2009 for anyone who relies on Google Search. Being penalized by Google means being lowered in rankings or dropped completely, which means being found and clicked on less.</p> <p>Google is open about some of the reasons it decides where to rank results in search. In recent years, the search giant has announced that mobile-friendliness and availability over secure protocols factor into where a site shows up in search. That is, of course, in addition to “relevancy,” which has been a factor in Google’s algorithms for a long time. Generally, Google search results are high quality and relatively similar from person to person, without a wide degree of interest targeting.
However, <a href="">in addition to its issues with Foundem</a>, Google has been involved in multiple lawsuits over low rankings or even blacklisting of certain sites and competitors.</p> <p>The differences in algorithms are what distinguish search engines from each other. They’re important to the identity of a search engine and are what make some search engines better for certain searches than others (or, better at finding what we want to find). While there isn’t anything wrong with that, it does mean there is little to no transparency in how search engines choose their results, which is a problem. As search engines get further ingrained into our lives for everything from finding a good restaurant to finding the latest on a political candidate, that lack of transparency may severely harm our ability to be informed. Without knowing what biases are built into a search engine–intentionally or not–we don’t know how much we can trust search results, especially when it comes to news and politics.</p> <p>Though trying to compel a search giant to publish its search algorithms is likely a waste of time–and arguably not terribly worthwhile–more transparency in how it decides the relevance of sites is worth pursuing. It’s hard to know what the motivations, political leanings, or even just software bugs will be in a search engine’s future.</p> <p>In the meantime though, we can choose search engines that are dedicated to unbiased and non-personalized results to make sure our search results are not keeping us in a filter bubble. Unfortunately, even for some search engines that are committed to those principles, such as <a href="">DuckDuckGo</a> and <a href="">StartPage</a>, which both highlight their commitment to privacy and neutrality, the details of how they order search results are relatively secret (or, in the case of StartPage, are based on Google). Others, like <a href="">Gigablast</a>, are fully open source so anyone can see their inner workings.
That makes it easier to hold them accountable for their search rankings so that we know they’re being reasonably fair and unbiased no matter what we search for.</p> Thu, 06 Sep 2018 00:00:00 -0400 #WalkAway and Russian Ops <p>Bot activity across the political spectrum is nothing new. <a href="">In a University of Southern California (USC) study</a>, bots accounted for a fifth of political activity on Twitter in the 2016 election. Bots also made up 400,000 of the 2.8 million Twitter users tweeting about the election, or about 15% of the users the USC study looked at. While bots supported campaigns of all political leanings, research has shown that Trump had, and continues to have, a significantly higher number of supporters that are bots rather than real people.</p> <p>Social media has been actively fighting the invasion of bots. Takedowns of bot accounts are ongoing and happen at such a magnitude that some sites, such as Twitter, have seen their user bases grow much more slowly than expected. The bot scourge appears unlikely to stop anytime soon. Spurred by the possibility of a “blue wave” in the 2018 midterm elections, bot activity is ramping up with new campaigns such as the recent “#WalkAway” hashtag movement.</p> <p>“#WalkAway” <a href="">is a viral Twitter hashtag</a> that made it into the top trending topics on Twitter hot on the heels of protests over topics like family separations and other border control practices. The campaign consists of tweets tagged with the #WalkAway hashtag posted by Twitter users claiming to be leaving the Democratic Party due to the party’s “intolerance and incivility.” The tweets appear to mainly be from bots and right-wing Twitter influencers rather than real Democrats.
<a href="">One of the tweets</a>, from a now-suspended account, was retweeted over 16,000 times:</p> <blockquote> <p>“Both my parents are Hispanic LEGAL immigrants, both were registered Democrats, and both this week told me they have decided to #WalkAway”</p> </blockquote> <p>Twitter took the account down over strong suspicions that it was a bot. Other Twitter users reported the account after realizing it didn’t have a credible history and that its profile picture was a face photoshopped from the cover of a book about penny stocks. The #WalkAway campaign as a whole <a href="">has been linked to Russian Twitter bots</a> and ranked as the third or fourth most popular Russia-linked hashtag for days.</p> <p>The “#WalkAway” hashtag is far from the first hashtag campaign linked to Russian actors appearing to try to influence the narrative of issues in the United States. Russian bots were linked to the spread of disinformation on social media throughout the 2016 election. After the Charlottesville riots, Russian bots took up various right-wing conspiracy theories and rallying cries, <a href="">pushing them viral on the platform</a>. Some accounts, including one that went by the name Angee Dixson, posted over 90 times a day to criticize the removal of Confederate monuments and to post pictures that supposedly showed left-wing violence. Others posed as various “Antifa” accounts, pretending to be violent far-left activists.</p> <p>When it comes to bots, accuracy and honesty are far from guaranteed. Bots are operated by relatively anonymous groups whose goals aren’t known and who don’t appear to prioritize truthfulness. Despite that, they’ve successfully inserted themselves into political discourse around the world, making it hard to trust what we read and spreading misinformation to those who may be too trusting.
Whether it’s from Russia, as the current allegations suggest, or a different country, social media discourse can be hard to trust.</p> Thu, 12 Jul 2018 00:00:00 -0400 Bet You Won't Click Right Here <p>It’s no secret that advertising revenue drives the Internet. Online advertising <a href="">has surpassed 200 billion dollars</a> and reaches nearly every corner of the Internet. Online ads make up the majority of the revenue of Internet giants including Facebook and Google, pay for many other free-to-use services, and, according to one study, <a href="">account for an estimated 25% to 40%</a> of Internet traffic on some networks. To convince us to click, ads have evolved to be increasingly intrusive and targeted. Some sites even run ads that look like normal news articles.</p> <p>The actual ads are only part of the story. Without people to see them and, sites hope, to click on them, ads don’t make money. As soon as someone leaves or decides not to visit in the first place, a site is losing money. This is part of the reason that sites add features to keep us coming back. Notifications, “likes,” infinite scrolling, and other features that a former Facebook engineer <a href="">describes as</a> “bright dings of pseudo-pleasure” have an addictive effect so that we won’t stay away. Social media sites have, in some ways, managed to convince us that <a href="">they’re a cure for boredom</a>: each time we’re bored, we want to be entertained, so we hop back on our social network of choice.</p> <p>These types of tactics don’t work to get people to visit a site in the first place, though. Somehow, addictive features or not, a site needs to convince you to visit. Thus, “clickbait” headlines were invented. Clickbait titles usually try to provide just enough information to make us curious but not enough to satisfy the curiosity, so we’ll click on the link. There’s no guarantee that the article we land on actually satisfies that curiosity.
Former Daily Show host Jon Stewart <a href="">compares clickbait titles</a> to carnival barkers:</p> <blockquote> <p>“I scroll around, but when I look at the internet, I feel the same as when I’m walking through Coney Island. It’s like carnival barkers, and they all sit out there and go, “Come on in here and see a three-legged man!” So you walk in and it’s a guy with a crutch.”</p> </blockquote> <p>Clickbait <a href="">has an effect</a> whether or not we actually click the article and whether or not we recognize it for what it is. While plenty of clickbait articles are generally harmless time wasters, the same tactics are being adopted by news outlets hoping to gain more readers and, again, more ad revenue. Clickbait from news sites tends to take the form of sensationalized headlines that the site hopes will get an emotional response, because emotionally charged clickbait is easier to fall for. This is not, as some might like to believe, isolated to a particular political leaning. Fake news sites that rely on clickbait-style headlines while providing no true reporting exist across the political spectrum. Unfortunately, studies show that <a href="">only about 4 in 10</a> people actually read articles beyond the headlines.</p> <p>We compound the problem of clickbait by sharing clickbait headlines with our friends. Clickbait titles are easy to share (or re-share) and can spread through social media like wildfire. Social media shares are highly important in determining which headlines spread and which vanish into relative obscurity. Sites have caught on to this and expanded their tactics to include “sharebait”: headlines designed to go viral on social media. The tactic works even if the articles contain nothing of substance (or even nothing true), because studies show that people are willing to share articles they haven’t read if they react to the headlines.
On Facebook, <a href="">six in ten people</a> were willing to re-share an article without actually reading it.</p> <p>Habitually sharing without reading isn’t a small-scale problem and may have very real effects on the political landscape. In the last three months of the 2016 presidential election, sharing of fake news on Facebook, most of which was pro-Trump or anti-Clinton, took the network by storm and <a href="">overtook shares of factual news</a> by about 1.4 million shares. While on a social media scale this is fairly small and unlikely to influence election outcomes—Facebook also says that shares don’t indicate overall engagement with the articles—it highlights how effective clickbait and sharebait techniques are, and suggests that they could become a major problem.</p> <p>All of this is part of the game for online media. The more shares, clicks, and views sites gather, the more ad revenue comes in and the more valuable their ad space becomes. Whatever attracts you—an ad, a clickbait title, or something else—websites hope that you’ll see it, share it, and maybe read the first few sentences, just long enough for them to show you some ads. We’re driving these tactics because they work even when we’re aware of them, and only a minority of people actually take the time to read and dig past the headline.</p> Thu, 21 Jun 2018 00:00:00 -0400 Social Media Empowers Democracy and Oppression <p>If you listen to the ways social media sites describe themselves, it’s usually a description of connecting and empowering people.
If you haven’t recently read the mission statements of social media sites, Twitter’s <a href="">published mission statement</a> focuses on “giv[ing] everyone the power to create and share ideas and information instantly without barriers” and <a href="">Facebook’s</a> is “to give people the power to build community and bring the world closer together.” While those statements aren’t false, at least at face value, operating at a global scale means the reality isn’t quite as romantic.</p> <p>Social media has indeed empowered all kinds of movements. Most users of large social networks connect with others freely without much thought to the network behind them, or its rules. In some ways, social media may be a more approachable version of the largely uncensored IRC chats and other sites from the earlier days of the Internet. One of the larger recent movements, the Women’s March, which roughly 1% to 1.6% of the United States population (3 million to 5 million people) <a href="">participated in</a>, <a href="">saw its start on Facebook</a>. Social media is undoubtedly powerful and provides platforms that connect everyone to the world.</p> <p>However, we now know that well-intentioned humans are not the only users of the networks. Bot armies, available for rent, also inhabit social media to spread all sorts of ideas. They call out racists, they spread information true and false, and they sow confusion. It’s tough for us to tell when users are bots or who really is behind a post. This makes social networks a great way to spread propaganda, which social media sites aren’t always good at removing.</p> <p>Bots and blatant propaganda aren’t the biggest information problem that social sites face. Because larger social networks operate globally, the companies behind them need to comply with local laws, wherever and whatever those laws may be.
Though users in much of the Western world enjoy a high degree of free speech (though not entirely on social media), other places don’t have the privilege of using social media to connect freely. Just last year, in 2017, a computer engineer in Vietnam <a href="">had his house stormed by police</a> over a poem he published on Facebook criticizing how the country was run. The <a href="">Harvard Law Review notes</a> that even in countries with strong freedom of speech protections, such as the U.S., social websites likely have the right to censor our speech as part of their own free speech rights.</p> <p>When social media and governments choose to collaborate to suppress an opinion, as we saw with the Vietnam arrest, users can be powerless to know about or fight their efforts. Digital content is the only content that can be banned so completely that nobody can see it. We don’t really know what the internal processes for censoring content on social media are, and it’s hard–if not completely impossible–for us to know when it’s happening. In the case of the computer engineer in Vietnam, the arrest came weeks after Facebook <a href="">committed to working with Vietnam’s government</a> to prevent content that violated local laws from appearing on the platform. Not only that, but the arrest is not the first and likely not the last, as <a href="">more than 50 countries</a> have passed laws over the last five years aimed at controlling how their citizens use the Internet.</p> <p>It doesn’t stop at intentional collaboration between a social site and a government. Independently, social media is being used by governments with both good and bad motivations. In addition to the obvious (social media being used to reach out to supporters and shape opinion), the Internet <a href="">provides additional ways</a> to keep an eye on constituents, regardless of motivation. Social platforms provide an easy way for ruling parties to keep tabs on the private opinions of their citizens.
Russian opposition leader Alexei Navalny <a href="">described</a> the Internet, used this way, as a “focus group” for the Putin regime. Private opinions expressed on social media can also help reveal how effective local officials are, since there isn’t always a direct channel for higher officials to be aware of local politics.</p> <p>The utopian idea that all it takes to empower people is to provide open access to the Internet is appealing, but it isn’t realistic: actually providing that open access is much more difficult in practice. Some governments want their piece of the action and block or censor sites that don’t comply with their rules. Net neutrality and Internet freedom efforts aim to put rules in place to ensure that Internet service providers and social websites, respectively, don’t close off content.</p> Wed, 30 May 2018 00:00:00 -0400 How To Manipulate Elections In The 21st Century <p>Propaganda isn’t a new idea, but we’ve been witnessing its evolution into a 21st century version. Throughout history, propaganda wasn’t always true, but it was usually believable. The word propaganda <a href="">came into normal use around 1914</a> but the act it describes is far from new. The ancient Greeks used games, theatre, courts, and religious festivals (the mass media of their time) to spread their ideas. Over time, propaganda moved to whatever the modern media of the day happened to be. Today’s propagandist has far more choices of outlet, including the Internet and social media, which can spread ideas faster than ever behind opaque fake identities that are hard to catch.</p> <p>Although spreading stories to push a belief isn’t anything new, making them believable isn’t important anymore. Starting the spread of a false story, or a series of false stories, is extremely easy online. Armies of fake social media accounts can be rented for very little.
The idea is not to push a specific idea, <a href="">but to make it harder to trust anything</a>. It’s a scorched-earth sort of approach where the intent is to create doubt, break faith in institutions that should otherwise be trustworthy, and manipulate politics with the resulting paranoia. Even publishing completely contradictory information, whether or not any of it is true, is effective because it can spread like wildfire when shared enough times on social media. People lose track of what’s true, start to question everything, and eventually refuse to believe anything at all. A firsthand account from Elizabeth Flock at PBS is <a href="">an interesting read</a>.</p> <p>Foreign-backed—primarily Russian—online misinformation campaigns have been widespread and growing. All the major social networks have reported removing hundreds, and in some cases far more, of accounts found to be part of misinformation “bot” networks. Facebook launched a tool that showed users whether they had followed any Russia-backed information campaigns. The campaigns aren’t limited in effect to the Internet, though they spread quickly through social networks. Russia-backed groups have launched protests and counter-protests in real life, everything from <a href="">LGBTQ+ rallies</a> to 2nd Amendment rallies. In summer 2017, two different <a href="">Russia-backed pages</a> organized dueling rallies at the same location in Texas. The campaigns have taken forms including Facebook pages and purchased ads, as well as leaked emails.</p> <p>It’s easy to hope that you would never be duped by a misinformation campaign or a foreign power trying to manipulate opinion, but even bona fide activists have been fooled into helping with rallies. We don’t really know how much recent events were affected. Congress <a href="">recently released</a> some 3,000 Facebook ads that were paid for by Russian groups.
<a href="">Twitter said</a> it removed more than 50,000 Russian bot accounts from the network in January 2018. Many of the campaigns, such as some “Antifa” accounts, appeared to be designed to make people angry, for example by posting things about defacing an opposing group’s materials. <a href="">Others</a>, such as a “Blacktivist” group, tried to create rallies in the wake of tense events, and still others advocated for the secession of Texas.</p> <p>The issue of Russian bots and misinformation is more than an online conspiracy theory. Research has been slowly tracking down bot accounts and has discovered that human and bot accounts on Twitter <a href="">act quite differently</a>. Sometimes, it’s obvious even to the untrained eye. Bots in the same campaign sometimes share the same message just seconds apart, and in alphabetical order (based on the account usernames). In some instances pointed out <a href="">by Twitter users</a>, the Russian owner of an account forgot to turn off location services, which tag the actual location a tweet was sent from (note that this can be spoofed). In one case, the bot networks <a href="">started tweeting</a> in defense of a Russian action before it actually happened.</p> <p>Part of the problem—likely the more easily reparable part—is that social networks have little incentive to address misinformation campaigns and bots. The social media space is generally unregulated, and topics that rile up a group can attract more attention, letting social media run more ads. However, as bots get better at looking human (and they generally are run by real people), it gets harder to catch them. Social media and the Internet at large need to get better at making us aware of coordinated campaigns and at dealing with the accounts behind them. To their credit, some social media sites appear to be starting to take the issue seriously. In 2017, Facebook announced it <a href="">planned to hire</a> 1,000 more people to review ads.
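</p> <p>The timing and alphabetical-order tells described above are simple enough to check mechanically. The sketch below is a toy illustration of that kind of heuristic; the account names, messages, and time window are invented for illustration:</p>

```python
# Toy heuristic for flagging coordinated bot accounts: identical messages
# posted within seconds of each other, by accounts in alphabetical
# username order. Data and threshold are invented for illustration.
from collections import defaultdict

def flag_coordinated(tweets, window_seconds=30):
    """Group tweets by text; flag groups posted in a tight time window
    whose usernames appear in alphabetical order."""
    by_text = defaultdict(list)
    for t in tweets:
        by_text[t["text"]].append(t)

    flagged = []
    for text, group in by_text.items():
        if len(group) < 3:           # too few copies to call coordinated
            continue
        group.sort(key=lambda t: t["time"])
        tight = group[-1]["time"] - group[0]["time"] <= window_seconds
        usernames = [t["user"] for t in group]
        if tight and usernames == sorted(usernames):
            flagged.append(text)
    return flagged

tweets = [
    {"user": "acct_a", "time": 0, "text": "Vote now!"},
    {"user": "acct_b", "time": 4, "text": "Vote now!"},
    {"user": "acct_c", "time": 9, "text": "Vote now!"},
    {"user": "zoe",    "time": 0, "text": "lunch was great"},
]
print(flag_coordinated(tweets))  # ['Vote now!']
```

<p>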
More recently, Facebook <a href="">stopped accepting foreign-funded ads</a> about an Irish abortion vote.</p> <p>However, regardless of the actions that social media sites take, either on their own or due to government regulations, we need to be more diligent about vetting what we see online. Russia’s <a href="">general foreign objective</a> isn’t to forward a specific idea or interest; it’s to weaken anyone it sees as an adversary. It’s possible to contain the effects, <a href="">as France did</a> during a 2017 election, by being aware of attempts to manipulate and having rules in place to control them. At the very least, we need to remember that social media is not a reliable news source, especially because it can mirror our own beliefs back to us instead of giving us credible information.</p> Thu, 10 May 2018 00:00:00 -0400 Sorry, Reality Violates Our Community Guidelines <p>Censoring content—whether to keep adult content out of the hands of minors, because of manufactured outrage, or for any other reason—is an interesting dilemma when it comes to digital content. Parental controls aren’t exactly new or complicated, and online services are increasingly adding “kid-friendly” restricted versions of their services. For the most part, those restrictions aren’t anything too novel as far as gated content is concerned. However, digital bans are more enforceable, and certain formats of content can be modified on the fly with surgical precision. We might not always know it’s happening or why, since sometimes the only reason given for a takedown is a violation of “community guidelines.”</p> <p>China is one of the most cited examples of digital censorship (though other countries, including Turkey, have similar policies).
China’s Internet is controlled by the state and exists within what’s referred to colloquially as “The Great Firewall of China.” The government restricts access to all manner of online content and services, sometimes blocking entire sites and other times only blocking specific pages. For the layman, there isn’t any way around the content blocking, and for those who know how to circumvent the blocks, doing so is illegal. By controlling what parts of the Internet are available, China is able to almost completely ban content from being seen within the country. This is different from, for example, banning books, because a book ban cannot round up every copy of a book. China, however, can instantly block information from the entire country.</p> <p>It isn’t only governments that censor content. Online services do so for the sake of interest targeting and for PR reasons. Facebook, which posts case studies of running advertisements on its service, <a href="">quietly hid pages</a> that suggested it might have the ability to influence elections. When asked if it could have influenced the 2016 election outcome, Facebook of course denied it. As if to back up the denial, an entire section of “Government and Politics” case studies disappeared from Facebook’s success stories list. To be fair, it’s within the right of any website to take down pages, or links to pages, on a whim, though Facebook’s timing may have been unfortunate.</p> <p>Making things disappear is only one way to censor digital content. Some media formats, such as ebooks, are fairly easy to modify—permanently or “on the fly.” Censored content in video and audio is relatively obvious (blanked or “bleeped” vocals, blurred images), and we’re all accustomed to it. An e-reading app called “Clean Reader” <a href="">tried to extend the same approach</a> to ebooks a few years ago.
Clean Reader, without permission from authors, allowed readers to choose how “clean” they wanted a text to be, and the app would blank out and replace words that were offensive according to the user’s settings (though, as the creators made very clear, the app did not sell modified books).</p> <p>Clean Reader’s approach is simplistic, a sort of automated find-and-replace for things it deems inappropriate. There are much more complex systems at work trying to keep the Internet clean, which do far more than find-and-replace. YouTube has software that scans every video uploaded for copyright and content problems. <a href="">Between October and December 2017</a>, YouTube’s algorithms flagged 6.7 million videos for review and of those, 76% were removed before anyone watched them. The service <a href="">has removed everything from</a> “Tide Pod Challenge” videos to adult content to videos of violent extremism. Video removals aren’t all good news, though. While Google publishes community guidelines, what types of videos are suitable for YouTube is decided by YouTube alone, with no real oversight aside from community outrage, which the company occasionally ignores.</p> <p>Though censoring content in some ways may keep the Internet clean, it also might be hiding the realities of the world from us, hurting our ability to be informed. In 2017, <a href="">YouTube faced outcry</a> for accidentally censoring videos showing atrocities in Syria. While the videos were graphic, they’re the work of organizations trying to document human rights violations as they happen. In some ways, we’re losing our ability to choose what we see, and that limits how informed we are. As big tech companies increasingly become distributors of news, we need a change in the culture and oversight of keeping the Internet clean.
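</p> <p>The Clean Reader approach described above amounts to a word-list find-and-replace over the text. Here is a minimal sketch with an invented word list (the real app offered configurable “cleanliness” levels rather than a fixed list):</p>

```python
# Minimal sketch of a Clean Reader-style filter: automated find-and-replace
# over a word list. The word list and replacements are invented.
import re

REPLACEMENTS = {
    "damn": "darn",
    "hell": "heck",
}

def clean_text(text, replacements=REPLACEMENTS):
    """Replace each listed word, matching whole words case-insensitively."""
    def swap(match):
        word = match.group(0)
        repl = replacements[word.lower()]
        # Preserve the capitalization of the original word
        return repl.capitalize() if word[0].isupper() else repl

    pattern = r"\b(" + "|".join(map(re.escape, replacements)) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(clean_text("Damn, that hurt like hell."))  # Darn, that hurt like heck.
```

<p>The `\b` word boundaries matter: without them, the filter would mangle innocent words that merely contain a listed word (“hello” would become “hecko”), a failure mode real profanity filters are notorious for.</p> <p>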
The realities of the world don’t match the community guidelines of various online services, and our filter bubble will get harder to escape when reality is actively censored instead of simply not being “recommended” to us.</p> Wed, 25 Apr 2018 00:00:00 -0400 We're Addicted To Social Media <p>It’s not just you; leaving social media is hard. Social sites appear to be genuinely addictive, and human psychology has a difficult time resisting them, especially when everyone around us is using them. The effects may even be getting stronger as social sites become part of our daily lives, carried in our pockets and, increasingly, worn on our wrists.</p> <p>The idea that a service could be addictive isn’t exactly new. CNN <a href="">posted about Facebook addiction in 2009</a>, and there are articles that are even older. Social media addiction has hit mainstream discussion lately as questions about the true effects of social media continue to mount after the 2016 election.</p> <p>The addictive effects of social media have been studied with varying degrees of scientific rigor. In data gathered from an experiment called 99 Days of Freedom, which encourages people to stop using Facebook for 99 days, <a href="">many people had trouble quitting the site</a>. Reasons for returning seem to lean heavily towards a fear of missing out, which is hard to overcome when practically everyone is on Facebook. Another factor is the notification count at the top of the page. We’re drawn to clickbait headlines with content like “You won’t believe…” and it’s thought that the notification count has a similar effect, working like a headline that reads “You won’t believe what x people said about you!”</p> <p>There’s even a book about the psychology behind building addiction to services. Nir Eyal, author of <em>Hooked: How to Build Habit-Forming Products</em>, talks about how to turn things like checking social media into a habit.
He described in <a href="">an interview with Business Insider</a> the idea that Facebook has been so successful—and addicting—because the service has managed to get itself seen as a cure for boredom. With it available everywhere, anytime we’re bored we can turn to Facebook. He likens the Facebook news feed to a casino game.</p> <p>Many social media sites have mission statements that paint a picture of services helping us connect to the people we care about. In their early days, the services may have been good at that, but with constant targeting and manipulation to make us spend more time (and view more ads), they’re more addictive than ever and less effective at connecting us. Engineers from those companies have started to raise the alarm about the addictive and harmful effects of social media. Even Eyal is taking a similar position and <a href="">surprised attendees at a recent conference</a> by introducing ways to resist the pull of social media’s tricks to keep us coming back.</p> <p>One of those people is the creator of the Facebook “like” button, Justin Rosenstein. <a href="">Rosenstein describes</a> the feature he created as “bright dings of pseudo pleasure.” He no longer uses Facebook and is part of a group of people with similar backgrounds who are building a convincing story to counter the PR of social media services. Among other things, the group believes the social media we know today has a negative effect on the political system. They suggest that social media, if left unchecked, could upend democracy—and maybe already has. They’re serious enough about it that they don’t use the popular products of Silicon Valley and send their kids to schools that ban devices like laptops and iPads.</p> <p>The addictive nature of various online services isn’t a problem with the Internet in general. Many of the addictive features are most likely intentional, designed to keep us on a website to show us more ads.
The ad-based model of providing free services is partly to blame because it incentivizes sites to be as addictive and sticky as possible. With no accountability for what gets shown in the interest of keeping users on a site, that’s a clear problem.</p> <p>Although it’s not a new problem, it has gotten worse now that social media is an effective place to spread propaganda and misinformation. Social media giants appear to be coming to terms with the problems of their ad-based addiction models as misinformation and allegations of fake news turn up on a daily basis. Zuckerberg himself <a href="">has admitted</a> that “we didn’t take a broad enough view of our responsibility.” One thing is clear: whatever apologies and solutions social media sites offer, it’s past time to hold them accountable for their effect on us and on our ability to be informed.</p> Sun, 15 Apr 2018 00:00:00 -0400 The Web And Our Attention <p>The Internet has a weird influence on our attention. We decide the value of a web page, and whether it’s worth reading, in seconds. Ten to twenty seconds is the average amount of time people spend on a web page (assuming it loaded reasonably quickly; waiting for a page to load drives people away). We look in a very particular pattern when we visit a page to decide what it’s about and whether it’s valuable to us. Studies have figured out how we behave, and sites (and advertisers) are learning to tailor pages and ads to attract our attention and hold onto us for longer.</p> <p>Not all of this is bad. Sites can use the research to improve how people actually read their content, which makes our experience better. This saves us time and effort when browsing the web because we’re able to visually scan a page and, if the site is designed well, get the gist of it <a href="">without reading the whole thing</a> if we’re not interested or if we’re in a hurry.
That’s important, since (as of 2013) about 38% of people never actually read or interact with a page, and of those who do, some won’t scroll down. The longer an article is, the less likely people are to read the whole thing before leaving. Sites now know that in order to keep people’s attention, they need to get it fast, and if they want to make a point, it needs to be made early on. Interestingly, the more literate someone is, the more likely they are to scan a page <a href="">without reading it in full</a>.</p> <p>If every site were honest, that might not be a bad thing. However, not every site is. Many sites are tailored to keep our attention as long as possible in order to show us more ads or gather more information about us that they can sell (or use to show relevant ads). Others might take advantage of how people look at web pages to hide information in plain sight, leading those who don’t read the full article to an intentionally wrong conclusion. As an example, text centered or <a href="">aligned right</a> is often completely ignored or not seen (in left-to-right languages), and only <a href="">20% of people’s attention</a> goes to anything they had to scroll down to see (and of that, even less attention is paid to things aligned to the right side). Putting a retraction, or information important to drawing the correct conclusion, in those areas may be an effective way of hiding it.</p> <p>Different site designs impact our attention in different ways. As such, it’s possible to manipulate a site’s design for a specific goal. In 2012, when Facebook introduced cover photos to pages and profiles, <a href="">what we paid attention to on Facebook changed</a>. Cover photos were looked at 100% of the time (and for the longest amount of time) while the time people spent looking at posts from the person or brand dropped. Design changes to Google’s search results are more interesting (and, arguably, more important).
In a small study (53 people), <a href="">changes in Google’s search page</a> design shifted people’s focus from the top left of the page to a more even spread and reduced the time it took them to find a search result by a little less than half.</p> <p>When it comes to search results, which many people use multiple times a day, how and where our attention is focused on a page matters. Since the focus on a Google results page is no longer the first result, being the first result for a search doesn’t matter as much as it used to, so sites may try to game their search ranking a little less. For other sites, optimizing to make us see more of what they want us to see, and less of what they don’t, is a big, profitable goal.</p> <p>Capturing our attention and profiting from it is nothing new, and companies have been doing it far longer than the Internet has existed. Using tricks to manipulate what we look at, especially when it comes to advertising or keeping our attention longer, is increasingly common. Facebook, in particular, has basically admitted to being as <a href="">addictive as it can be</a>. The more addictive and attention-grabbing a service is, the more it can show and the more data it can gather about the people looking at it. We’ve reached the point where the amount of data gathered is dangerous, as issues like the Cambridge Analytica scandal show.</p> <p>Data gathering aside—though important, it’s a side issue to this discussion—keeping our attention and curating what we see is a danger to our staying informed. Human psychology and behavior when it comes to the Internet is an active field of study, and its findings are being put to use. Social networks have no requirement to be neutral, and the algorithms that power what they show <a href="">have the biases of their creators</a> built in.
While the Internet gets better at holding our attention, it also gets better at manipulating it in ways we don’t realize, modifying our views and isolating us from things that matter—all in service of holding our attention for longer.</p> Thu, 05 Apr 2018 00:00:00 -0400 Your Digital Stuff May Not Be Yours <p>On a Friday in 2009, some people discovered that copies of Orwell’s <em>1984</em> and <em>Animal Farm</em> had <a href="">mysteriously vanished</a> from their Amazon Kindle e-readers. While this wasn’t the only case of mysteriously vanishing books (copies of <em>Harry Potter</em> and Ayn Rand novels vanished at other times), it was interesting enough to hit the news because <em>1984</em> was involved. The disappearance of the books was not, of course, accidental. Amazon had intentionally taken down the books and deleted downloaded copies from Kindles due to copyright disputes. It sparked an interesting discussion about ownership of digital goods; depending on how you’ve purchased them, maybe you don’t fully own them.</p> <p>Amazon acknowledged that perhaps its choice to delete the books from the devices of customers who had bought them was a bad one, and promised it would change its policies. But, in 2012 Amazon <a href="">reignited the discussion</a> by deleting the Amazon account of a Norwegian IT consultant and clearing all the books from her Kindle. Though on the much smaller scale of just one Kindle, Amazon was again the subject of indignation from its e-reader customers.</p> <p>While the idea that a company you’ve bought digital things from could take those things you’ve paid for away isn’t pleasant, <a href="">companies are allowed</a> to do that by the terms of service you agree to when you make an account. Amazon, Barnes and Noble, Apple, and other services have similar agreements. None of their agreements state that you actually own any of the digital content you buy; you’re not buying the content, you’re paying for access that the company can revoke.
In fact, for many e-books, even if you take the e-book file off the device you bought it on, it may be difficult or impossible to access it anywhere else because it’s protected by something known as DRM (short for, depending on who you ask, “Digital Rights Management” or “Digital Restrictions Management”).</p> <p>It’s fitting that the discussion of digital ownership ended up in the mainstream because of <em>1984</em> and books. However, the same issue extends to other digital content. Google <a href="">has the ability</a> to remotely disable and delete apps from Android phones, and periodically does when it discovers apps that contain malware. Apple <a href="">has the same ability</a> for its iPhones and iPads. Although removing malicious apps is a benevolent use of the ability, the ability is there nonetheless.</p> <p>The ability to remotely take away access to certain things hasn’t been used maliciously or for general censorship as far as we’re aware. But it raises concerns beyond digital ownership rights now that net neutrality, and whether online services should be neutral, are growing problems. We recently learned that Facebook, in addition to its various other recent controversies, <a href="">quietly took down case studies</a> about its success in getting various politicians elected. To be fair, Facebook’s revenue comes mostly from ads, and it’s not uncommon for politicians and political groups to buy online ads.</p> <p>While we haven’t seen, and likely won’t see, something on the scale of the book banning in Orwell’s <em>1984</em>, books are still banned (you can see a list of books banned in 2016 in the U.S. <a href="">here</a>). Banning a book electronically, for any reason, and removing copies from devices people own is the only way that a book ban can actually be enforced; physical books can’t be recalled in the same way, and places such as libraries can refuse to take a book out of circulation.
Sometimes it goes beyond content: <a href="">courts have tried</a> to force companies like Dish to remotely disable devices installed in people’s homes and AOL to uninstall software from people’s computers with “updates” because of patent infringements.</p> <p>What makes this scarier is that companies have the ability to censor content for almost any reason. Amazon was known to selectively ban books for <a href="">vague reasons</a>, such as certain pornographic content. Amazon has caved to public outcry on certain banned content, but there’s no guarantee it would for something more important if it felt the content was against its values. It’s worrisome when we know that foreign actors are working to interfere with elections, potentially with the help of online services (intentionally or not). Though anyone can publish an e-book, it’s up to the companies making it available whether it gets a wide platform rather than a download page on an obscure site. Worse still, if a court can order Dish to disable almost 200,000 devices installed in homes, a court <a href="">might also be able to order</a> the banning of certain content, and companies like Amazon have the technical ability to comply.</p> Sun, 25 Mar 2018 00:00:00 -0400 Big Cloud, Big Leverage <p>Net neutrality is extremely important in protecting the Internet as it exists today, but it may not go far enough. The guidelines recently struck down by the FCC that preserved net neutrality applied only to Internet Service Providers, the companies that provide access to the Internet. While we need regulatory protection from ISPs due to a lack of competition, we may be forgetting the other side of the cloud: the services we access via our ISPs. Owning a big, widely used cloud means online services have their own leverage over ISP business and customer experiences.</p> <p>Neutrality as it pertains to online services is distinct from net neutrality, though related.
It’s possible for an online service to violate the goals of net neutrality without violating net neutrality rules. Online services are able to see what ISP their customers come from, assuming they’re not using a VPN service, Tor, or something similar to mask their origin. Using that information, a service could treat visitors on a particular ISP (or in a particular region, or on a particular device) differently. Services on the scale of Facebook or Netflix could use their massive user base as leverage to get what they wanted from an ISP.</p> <p>A website could change the experience for users of a particular ISP in any number of ways, including throttling or degrading its service. While it might seem odd that a service would intentionally make its own site work poorly, it has happened. In 2016, Netflix was caught throttling traffic and degrading video quality for customers accessing its site on AT&amp;T’s and Verizon’s mobile networks, but not for customers on Sprint or T-Mobile. It’s not clear what Netflix was trying to accomplish by throttling its own service on two of the four major U.S. mobile networks, though the company was careful to avoid using the word “throttling” in its explanation. Netflix claimed that its throttling was an effort to take better care of its customers by <a href="">helping them stream more</a> without using up their data plans.</p> <p>The fact that net neutrality only cuts in one direction isn’t necessarily an accident. When Google first explained the goals of net neutrality in the FCC’s broadband Internet proceedings, the company argued that neutrality rules were needed only for ISPs. According to Google, only Internet Service Providers, not online services, <a href="">had the ability and the incentive</a> to manipulate Internet traffic.
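Whatever the incentives, the ability is real: identifying a visitor’s ISP takes little more than a lookup from IP address to network owner. A minimal sketch in Python of ISP-based differential treatment (the prefixes, ISP names, and bitrate caps below are invented for illustration; real services resolve IPs against full IP-to-ASN databases):

```python
from ipaddress import ip_address, ip_network

# Hypothetical prefix-to-ISP table using reserved documentation ranges;
# real services use commercial or public IP-to-ASN data instead.
ISP_PREFIXES = {
    ip_network("203.0.113.0/24"): "ExampleTel",
    ip_network("198.51.100.0/24"): "DemoCable",
}

# Invented policy: cap video bitrate (kbps) per ISP; None means no cap.
BITRATE_CAPS = {"ExampleTel": 600, "DemoCable": None}

def bitrate_for(visitor_ip):
    """Return the bitrate cap to apply for this visitor, or None."""
    addr = ip_address(visitor_ip)
    for prefix, isp in ISP_PREFIXES.items():
        if addr in prefix:
            return BITRATE_CAPS.get(isp)
    return None  # unknown ISP: no special treatment
```

A visitor from the hypothetical “ExampleTel” range would be served capped video while everyone else streams at full quality—server-side discrimination that no ISP-focused rule touches.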
The idea that websites wouldn’t have similar incentives and abilities was likely untrue then, and isn’t true now, as Netflix proved with its throttling.</p> <p>While the concern that ISPs are becoming less competitive and more likely to violate neutrality principles as they get larger is valid, we need to take into account the fact that online giants have the same abilities and some of the same problems. Unfortunately, the current net neutrality debate tends to focus only on ISPs, though not necessarily unfairly, as neutrality for Internet providers is a more pressing and active issue.</p> <p>With or without net neutrality guidelines that regulate ISP practices, online giants can legally violate net neutrality goals. To what end, it’s hard to tell. Netflix claimed it throttled its services on AT&amp;T and Verizon in order to benefit its customers, but could have used the same practice to push customers who frequently streamed Netflix away from AT&amp;T or Verizon. This can matter on a massive scale when services like Facebook, which boasts <a href="">1.4 billion active daily users</a> (and over 2 billion active monthly users), want to make a deal with an Internet provider. Services could choose to be faster on the ISP that paid the most for higher speeds, leaving anyone on other ISPs in a different kind of Internet slow lane.</p> <p>Internet giants may not have chosen to exploit this yet, but Netflix’s statements on its 2016 throttling are somewhat telling. The company <a href="">told the Journal</a> that “historically those two companies [that Netflix wasn’t throttling] have had more consumer-friendly policies.” From a pure net neutrality standpoint, it’s tempting to applaud Netflix’s actions (especially in light of that statement) because Verizon and AT&amp;T have been against the net neutrality movement.
However, forcing a different experience on customers based on their ISP, when most people are limited to a few providers, or trying to leverage a deal out of an ISP, is problematic and violates the goals of net neutrality.</p> Wed, 07 Mar 2018 00:00:00 -0500 Social Media is Eating the Web <p>The social web of Facebook, Twitter, Google, and other big social media sites is slowly eating the rest of the Internet. We know that social media only makes up part of the Internet, albeit a large and growing part, and that there are online destinations outside their networks. But, a lot of online destinations that aren’t part of the social web are still integrated with it, bringing social features and their associated tracking along for the ride. To put it bluntly, it’s getting harder to escape social media, and especially Facebook.</p> <p>As we continue to get more connected across the Internet and our media habits are more integrated with the various social networks we’re part of, more sites now rely on social media to get visitors. It’s a good bet: more than half of people online are on Facebook, and the <a href="">average American</a> spends over 40 minutes a day on it. <a href="">Vox reportedly gets</a> 40% of its visitors from Facebook, and <a href="">other sites</a> might have even higher percentages. This means that Facebook is a major platform for providing people with links to what they see, and changes to its feed could influence the habits of more than half of people online.</p> <p>Though that’s probably not unexpected, Facebook’s reach is surprisingly wide, and its position as a jumping-off point isn’t where things end. Sites that sport the Facebook “like,” “share,” or other buttons on their pages are allowing Facebook <a href="">to track you</a> across them, even if you’re not logged into Facebook. <a href="">Six percent</a> of the top 10,000 sites (in terms of traffic) load that code from Facebook’s servers.
On an average website, 16% of the JavaScript loaded on a page (code used for everything from interactivity to tracking) comes from Facebook, and it can make pages load slower. In turn, the data collected by that code drives what Facebook suggests and shows to you. Facebook <a href="">appears</a> to have even patented its method of gathering data about you this way.</p> <p>Facebook is not the only social site that does this. Twitter <a href="">announced</a> it would start doing the same thing across websites that use Twitter code to show a “Tweet this” or “Follow me” button. Similar to Facebook, Twitter explains it uses this data to make more relevant suggestions to you, which means more tailored ads and suggestions of whom to follow, among other things. Other social media sites provide similar buttons that may have the ability to track you as well.</p> <p>Seeing suggestions and social feeds that are more relevant to our interests isn’t, in and of itself, a problem. The problem is that sites are getting better at showing us ads that can change our views, ads that are actually <a href="">more effective</a> at doing so if we think they’re targeted to us. It makes our filter bubble, our increasingly isolated reflection of ourselves from the Internet, even more isolating. And, it may <a href="">cost us more money</a> as sites get better at showing us ads for things we don’t need, or as sites raise their prices just for us because they know we’re willing to pay more.</p> <p>If nothing else, the data collected by these sites is valuable. Facebook accounts for a quarter of online ad revenue and more than a third on mobile. That’s not by accident; it’s by selling access to the data collected from across the Internet, which it makes more valuable by collecting more of. It’s also risky.
If Facebook, or any other social media site, were breached or complied with a government demand for information, this highly personalized data could end up nearly anywhere, or be used for nearly any purpose.</p> <p>It also appears to be getting worse. Not only are social media sites being integrated into other websites, the reverse is true as well. The New York Times, BuzzFeed, and several other media companies announced in 2015 that they would start publishing content directly to Facebook. Not only will we be tracked and have secret algorithms suggest news to us, but what we see, as well as what’s available to the open Internet, is subject to the practices of the social network it’s on. We’ve already seen Facebook start, and then kill, an initiative to <a href="">expand the reach</a> of news articles. The obvious end goal for Facebook is showing you more ads and tracking you better because you never need to leave, but a side effect could be a greater ability to manipulate what you think and what the media reports on.</p> Sun, 25 Feb 2018 00:00:00 -0500 The Bots Have Arrived <p>It’s hard to know exactly who, or what, an online persona actually is. We’re relatively sure that the people we know in real life are who they claim to be online (though their online life is likely nicer than their real one), but as for anyone else, it’s anyone’s guess. The fact that some larger social media sites, Facebook, Twitter, and Instagram in particular, are also home to the personas of brands further blurs the line between what’s real and what’s not. Then, there are the bots.</p> <p>Social media bots come in a variety of flavors. At their simplest, they’re helpful and provide things like unit conversions, periodic market prices, or other non-conversational information. At their more complex, they stand in for brands or people, tweeting about current news and even trying to talk to other users. With the more complex bots, it can be hard, if not impossible, to tell a bot from a real person.
Due to that difficulty and the ability to manipulate online conversation by influencing trending topics (which is possible to do with enough bots), we’re entering a danger zone of manufactured conversation and artificially influenced views online.</p> <p>It’s possible to buy access to bots, and some brands are suspected of doing so to inflate their social media influence. Not only that, but there’s an entire online industry devoted to fake or bot accounts that follow, “like,” and comment on content to boost its social media presence. A Forbes contributor <a href="">describes the effect as “Bot Rot.”</a> We don’t reliably know how many social media users are real, though some platforms are more affected than others. In summer 2017, a security researcher published a study indicating that <a href="">millions of Instagram users are actually bots</a>. Twitter has <a href="">shut down millions of bot accounts</a> over the past year, and it’s suspected that <a href="">at least half of Trump’s followers are bots</a>.</p> <p>The bot problem has implications for propaganda as bots have gotten more complex and bigger actors have learned to use them. In 2011, the <a href="">United States Central Command awarded a contract</a> for an “online persona management service” to a firm in California, which included fake online profiles—effectively a bot army—so we know that even state actors have taken an interest.</p> <p>While it may seem a bit far-fetched that bot armies could be manipulating online conversation, it appears that it’s already happening, and in alarming ways. In 2015, <a href="">over 75,000 online bots</a> were used to fight protests and critics of the Mexican government. Those bots appeared online in 2012 and were used to spam hashtags being used to document human rights abuses, among other things.
Recently, an army of bots that spread fake news on a wide scale was discovered to have been <a href="">operating during the 2016 election</a> (and there are now online tools to see if you interacted with any). Just this year, in 2018, <a href="">it was reported</a> that bot armies were being used to beat down dissent against the Saudi Arabian government.</p> <p>Although it’s hard to believe that we could be influenced by armies of bots, we may be more impressionable than we think, and it’s remarkably inexpensive to buy a bot army. The Daily Beast, a news outlet, <a href="">bought access</a> to a Russian bot army of 1,000 accounts for just $45, and found it could buy software to control it for $250. Bot armies are advertised for anywhere from the $45 The Daily Beast spent to more for accounts that have existed longer or otherwise seem more legitimate. It takes far less than that to influence the conversation. <a href="">MIT found</a> that a single upvote on a story improves the response to it by 25%, and an early downvote can cause it to be seen as a bad article. Facebook even <a href="">experimented with manipulating</a> its users’ moods by changing what words were seen in their feeds.</p> <p>Bots are not all bad. Armies of bots patrol Wikipedia, Reddit, and other sites, blocking malicious edits, moderating hate speech, and answering questions. But, armies of bots are also working to influence what we talk about and what we see online by poisoning hashtags, discussing fake news amongst themselves, and by voting and commenting on content. To the untrained eye, it can be very hard to tell what’s real.</p> Wed, 14 Feb 2018 00:00:00 -0500 The Unraveling Social Ecosystem <p>Social media promises to connect people across distances big and small, no matter where people are physically located. Social networks do provide a platform for that, but only if you become a member of the network that connects the people you want to connect with.
If you use Facebook while a friend uses Twitter, connecting is a no-go unless one of you joins the other’s platform or settles for looking at things without interacting with them. It’s worse when it comes to interest groups that operate primarily on a particular social network. Without using the platform, it’s impossible to keep up with and participate in the group. Many social platforms aren’t as open as they pretend to be, and limit how much you can see (if anything at all) and stop you from interacting until you sign up.</p> <p>Physical communities have been using online platforms to connect for a long time. Technologies such as email have provided open ways for anyone to be involved before the rise of social networks. The move to social media is logical because dedicated social networks provide more features and more powerful community-building platforms. Unfortunately, as social networks have developed into their own fairly closed ecosystems, communities <a href="">have been closed off</a> from people who choose not to be part of those social networks. Some communities rely on social media so much that their social media presence is the only place they exist online. Membership rosters, events, and even organization documents might be inside a closed ecosystem.</p> <p>Although social networks such as Facebook like to remind us that “Your memories are important to us,” we don’t know how committed they actually are to that idea. On Facebook, communities have disappeared due to disgruntled users or political groups <a href="">abusing the “Report Abuse” button</a> or <a href="">for comments that weren’t their fault</a>. Facebook also makes changes to its news feed algorithm that make posts from fan pages and groups appear more or less often, based on criteria that are trade secrets.
Just in the past month, <a href="">Facebook announced</a> it would roll out a news feed change that prioritizes posts from friends over those from groups and pages.</p> <p>Intentionally or not, this has some interesting effects. It gives the social network the ability to control how real-world communities interact and what they see of each other by changing how often their posts appear—which is why a requirement of neutrality for online services is so important. It also forces more people to join the network to participate in even local communities, or to choose to be excluded if they don’t want to use the social network for any reason. Worse, it walls off information from the rest of the online ecosystem.</p> <p>The effect of walling off information extends beyond the network itself. News networks, local organizations, political figures, and all manner of other people and things encourage people to talk to them via social media. This can mean that it’s difficult or impossible to be heard by your local community, despite physically existing as part of it, if you’re a member of the wrong social networks. Or, maybe more interestingly, the section of your local community you can experience online isn’t one that matches your values. Demographics <a href="">differ a bit</a> among social networks.</p> <p>Social media sites are generally free to run their service as they want because, other than outcry from their users and advertisers, there aren’t many hard rules they need to follow. We’re seeing social media giants—some of the most successful services online—break away from the principles of openness and choice that the Internet should provide. While this is a fairly recent issue, it only stands to get worse as services close themselves off more and wall off more information from the rest of the Internet.
Should they develop political motives, go offline, or change their target audience, there are a lot of communities and a lot of content that could disappear from the Internet.</p> Sun, 04 Feb 2018 00:00:00 -0500 Your Internet Might Be Different From Mine <p>When we get online, we may not see the same Internet as others. All of us are living in what’s referred to as the “filter bubble,” or a manufactured version of reality produced by algorithms tailoring content to our interests. The effect produced is an echo chamber that reflects the beliefs social media and search engines think we hold. We may feel more connected and educated, even though the opposite is true.</p> <p>The online world has quietly evolved from one of openness and community to one divided by algorithms and artificial intelligence. Unfortunately, human nature makes <a href="">challenging our beliefs uncomfortable</a>, so we haven’t noticed as our feeds have changed to reflect us. Our social media is comfortable but <a href="">more divisive than we could have ever imagined</a>. It’s relatively rare for our feeds to show us opposing views, and our own behavior is a disincentive for sites to show opposing views to us because we’re less likely to click on them.</p> <p>The divisive nature of the filter bubble is most obvious when it comes to politics, especially with the 2016 election. A post titled “Why I’m Voting for Donald Trump” was shared over 1.5 million times <a href="">[infographic]</a> on Facebook. Another, titled “There are five living U.S. presidents. None of them support Donald Trump” was shared 1.7 million times <a href="">[infographic]</a>. Depending on which way Facebook thinks you lean politically, you likely only saw one or the other and only saw content related to the one you saw. This is why Hillary Clinton winning the popular vote substantially and Donald Trump becoming president were each so shocking to the opposing side.
It’s also why some people <a href="">continue to believe</a> that Trump won the popular vote, though he didn’t.</p> <p>The degree to which sites change for different people varies. Facebook and other social media tend to be the most drastic, while search results on Google <a href="">tend to be the least affected</a> (though there are exceptions). Personalization is no secret; Facebook <a href="">talks about it in its help center</a>, and Google offers settings for <a href=";hl=en">news</a> and <a href="">ad</a> targeting. What we don’t know is everything that’s hidden, or why, as most sites keep their algorithms secret.</p> <p>The effect is interesting because it doesn’t only affect us as individuals (and where it does, the effects may not be as major, yet, <a href="">as we might think</a>). Eli Pariser, who coined the term “filter bubble,” <a href="">points out</a> that some of the people most reliant on social media are journalists, and their own filter bubbles may be influencing what they write about.</p> <p>The fact is, we don’t fully understand the effects that social media and the filter bubble have now, and we don’t know how they will evolve in the future. Facebook, Google, and other sites could take steps to reduce algorithmic bias and to help us break out of our filter bubbles, but for now they don’t have much reason to. Until then, our filter bubble is ours to break out of because we know <a href="">we’re not always seeing both sides</a>.</p> Thu, 11 Jan 2018 00:00:00 -0500 We're Censoring Our Own Reality <p>The Internet is a place where freedom of speech reigns, except where it doesn’t. While everyone is free to share their thoughts, in whatever form they take, most sites impose limits on what can be shared. This isn’t necessarily a bad thing; it keeps unlawful and hateful content from sharing a platform with lawful free speech.
Sites usually explain what’s acceptable to share in their Terms of Service or an equivalent document, normally written in more pages of legal language than any normal person would ever care to read. These limits don’t stop everything, so sites rely on users to report unacceptable content, which invites human moderation or prompts the site to automatically take things down.</p> <p>Content moderation creates problems even as it solves others. Algorithms that try to moderate content automatically are not perfect. YouTube, for example, has been <a href="">combating videos that are built to make it through its filters</a> for kid-friendly videos, but that have disturbing storylines or content that isn’t suitable for kids. When they do work correctly, algorithms aren’t necessarily unbiased; the software itself has no bias and is doing what it’s told to do, but the <a href="">people who develop the algorithms might be</a>. It’s not always intentional, either. Some things are not easy to measure directly, but might be measured indirectly, such as using a family history of crime to decide how likely an individual is to commit a crime in the future.</p> <p>We don’t really know how these algorithms decide what content is acceptable and what isn’t. They are, for the most part, trade secrets of the companies that use them. However, no matter how biased, manipulable, or downright wrong they can be, they’re applied on a massive scale. Facebook, YouTube, and other services rely on these algorithms to decide what posts stay up and what gets shown in searches.</p> <p>However, algorithms are not the only moderation tool online. They miss things, as we know from YouTube’s ongoing battle. To compensate, sites also <a href="">rely on teams of humans</a> to check suspect posts or on their communities of users to report posts.
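The report-driven side of that pipeline often reduces to a simple threshold rule. A minimal sketch (the thresholds are invented; real platforms weight reports by reporter history, content type, and other signals):

```python
# Invented thresholds for illustration only.
REVIEW_THRESHOLD = 3     # reports before a human moderator sees the post
AUTO_HIDE_THRESHOLD = 10  # reports before the post is hidden automatically

def moderation_state(report_count):
    """Map a post's report count to a moderation state."""
    if report_count >= AUTO_HIDE_THRESHOLD:
        return "hidden"   # taken down automatically, pending review
    if report_count >= REVIEW_THRESHOLD:
        return "queued"   # sent to the human moderation queue
    return "visible"
```

Under a rule like this, a handful of coordinated accounts is enough to make a post vanish until a human reviews it, which is what makes report-driven moderation open to abuse.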
Reddit, for example, allows users to report posts to the moderators of communities and to Reddit itself, as well as to give negative feedback to content. With enough negative feedback, posts can effectively disappear from the site. This causes communities and even entire sites to develop a bias towards the beliefs of the majority of their users, contributing to the filter bubble effect. The algorithms that build individualized news feeds learn from this behavior as well.</p> <p>Content moderation, both algorithm-driven and user-driven, can push anything offline, from single posts to entire communities. Brigades of users have managed to get <a href="">Facebook groups and pages taken down</a> because they disagreed with them. This type of user-driven moderation is also taking down content uploaded by people trying to expose atrocities in places such as Aleppo. While this type of content can be gory and may be inappropriate for some users, dropping it from a site entirely may be making <a href="">evidence of war crimes disappear</a>. YouTube recently rolled out changes that took down over 900 channels that were documenting the civil war in Syria. Facebook has been removing images documenting atrocities committed by the Myanmar government since September.</p> <p>With social networks removing content that users or algorithms find distasteful, we’re censoring the very networks that promise openness and global connection. While we worry—correctly—about ISPs and governments hiding content, we’re also doing it to ourselves. Worse, there’s little to no oversight to stop us, or the social networks we’re contributing to, from taking down things that are important.</p> Fri, 22 Dec 2017 00:00:00 -0500 The Net Neutrality Fight is not Over <p>Today, December 14th, the FCC passed the Restoring Internet Freedom proposal, which <em>repeals</em> net neutrality rules, in a 3-2 vote, to nobody’s great surprise. While the result is disappointing, this is not where net neutrality ends!
We need to make a lot of noise and support organizations that are fighting on our behalf. The University of Maryland Program for Public Consultation and Voice of the People reports that <a href="">83% of Americans were against the repeal</a>. Popular opinion is on our side.</p> <p>Where we go from here is uncertain, but it’s not over. Fights for net neutrality are already gearing up, and lawsuits are being filed. The New York Attorney General announced he will sue the FCC, and other organizations will likely follow suit.</p> <h2 id="heres-how-you-can-help">Here’s how you can help</h2> <p>Visit <a href=""></a>. They’re still in the fight, and they’re now gathering support for Congress to overrule the FCC.</p> <p>Consider donating to the ACLU, EFF, FreePress, and other pro-net-neutrality organizations.</p> <p>Write to your government reps about net neutrality (this site helps! <a href=""></a>)—even if they’re on your side. Use ResistBot (text 50409) to fax them. Make sure to cast an informed vote in your next election. Don’t forget about your local officials!</p> <p>If you have a municipal network, independent ISP, or mesh network in your community (or infrastructure that could support one of those), help fight for it in your community and get involved if possible.</p> <p>Make noise! Don’t stop talking about net neutrality online and off. Join a protest if there’s one local to you. The worst thing we can do now is be quiet. We need to make sure that nobody forgets and that our elected officials can’t ignore us.</p> <p><strong>Do it for the Internet we know and love. Do it for a fair, neutral, and open Internet.</strong></p> <p><em>Originally posted at <a href="[email protected]/message/EQFB3LQMU2QSSGKDPVIJUSAVOMSYWQTY/">[email protected]/message/EQFB3LQMU2QSSGKDPVIJUSAVOMSYWQTY/</a>. Minor edits have been made.
More net neutrality resources are available at <a href=""></a>.</em></p> Thu, 14 Dec 2017 00:00:00 -0500 We Can't Tell When the Internet is Lying <p>We’ve long been told that we can’t trust everything on the Internet. At one point, that was a primary lesson taught to people new to the Internet. It turns out that we’re not very good at figuring out what’s true online. In an effort to show more ads (and to keep people around longer), sites have made it harder to tell genuine content from ads. Many people have trouble telling when an image has been manipulated. With the rise of fake news, more people are confused or doubting real news, or simply care less about the truth as long as what the Internet claims matches their beliefs. Even the people we expect to be the most Internet-savvy are not good at figuring out what to trust in some cases.</p> <p>In a study of 700 men and women, only about 60% of participants were able to tell when a picture had been manipulated, which is only slightly better than guessing at random. Of the ones who were able to identify manipulated images, less than half could tell what in the image had been modified. That’s when people are looking for problems - in another study, <a href="">most high school students took photos at face value</a> without verifying them, even re-sharing them. This has real-world implications for how well informed we are. In several terrorist attacks, photos of alleged terrorists have circulated, even driving sites such as Reddit to attempt their own community investigations. In several cases, the images circulated <a href="">were fake or completely unrelated to the attack</a>. In one case, the same image was circulated for two different terrorist attacks.
More recently, a doctored image of Trump helping the rescue efforts in Texas after Hurricane Harvey <a href="">was shared over 18,000 times</a> - the image had been edited from a 2008 photo from Iowa.</p> <p>We’re not much better when it comes to news, and online advertisers are taking advantage of that. Sites run ads that pose as articles, a practice called native advertising. Most people <a href="">aren’t able to tell native advertisements from real articles</a>, according to a 2015 study, even when they are marked as ads. However, most people feel that native ads hurt the credibility of the site that ran them, if they notice them in the first place. Similar statistics extend to people who have grown up with the Internet in their lives - middle schoolers <a href="">aren’t able to tell native ads from articles either</a>, and high schoolers couldn’t <a href="">tell a real news source from a fake one</a>.</p> <p>Fake news makes the effects worse in a way, by creating confusion about what’s true online. People are aware that they should be cautious about what they trust, but fake news and opinion pieces passing as news <a href="">leave people doubting facts</a>. Unfortunately, this is in addition to the number of people who are willing to believe anything they read without verifying it. Worse, with politics growing more divided, people are willing to accept as fact things that are bad news for the opposing political side - <a href="">even if they are untrue</a>.</p> <p>All of this has real-world consequences, with incidents like the <a href="">"Pizzagate" shooter</a> who fired an assault rifle in a D.C. pizzeria in response to a Hillary Clinton conspiracy theory. The inability to identify fake images can impact court cases, which often use images as evidence. Without easy access to a neutral Internet, it’s much easier to get caught in an echo chamber of false posts with no way to know.
Even some of the most trustworthy sites run native ads (<a href="">including sites such as Forbes and The New York Times</a>), and sites assumed to be trustworthy have been fooled by fake news (<a href="">including Google</a>). With a neutral Internet, we can catch and correct such mistakes, but with an Internet curated by ISPs or an online world controlled by large players in cloud services, it’s much harder, if not impossible, to do so.</p> Tue, 07 Nov 2017 00:00:00 -0500 Who Owns Your Thoughts? <p>Targeted advertising and targeted news feeds are commonplace online. Services try to tailor what they show to keep you interested so you’ll spend more time on their site and, hopefully, click more ads. Using information about how you react to what they show you, they build a profile of your interests to better tailor what you see next. Unfortunately, those services don’t have an interest in making sure you see a neutral view of the world or even that you see every post from the people or brands you follow. They’re interested in providing results that are relevant to you, whether or not they’re always correct, so they can gather more information and continue selling advertisements.</p> <p>Interest targeting takes a lot of different forms online. The most obvious is online advertising, which comes as no surprise to people accustomed to the Internet showing them ads for things they searched for recently. Much of the online economy and free-to-use services rely on this to fund websites. Some services sell ads directly, while others sell the targeting information itself. The same data is used by services to learn your tastes so they can suggest events, local restaurants, and other things to you.
Data gathering for better targeting is expanding as tracking <a href="">moves towards mobile devices</a> that people carry with them, providing location and habit information beyond what can be tracked with a home computer.</p> <p>Targeting has made its way past ads and suggestions and into <a href="">news feeds</a> and, to a degree, <a href="">search results</a>. Services curate what they show based on what they think you’re interested in. The fact that they do this is likely unsurprising, but the degree to which they do it is much larger than is immediately apparent. Social media is similar, <a href="">hiding certain posts</a> from the things and people you follow if the site thinks you’re not interested. The resulting feeds can make it difficult to get honest information if the facts don’t match what a site thinks you believe. Worse, it can lead to an effect known as the <a href="">Filter Bubble</a>, where sites confirm what they think you think, whether or not it’s grounded in reality. The sites you use may be choosing what you see without you realizing it.</p> <p>With some effort, it’s possible to escape your bubble. However, targeting has a deeper effect than trapping you in your own online reality. Targeted advertising, aside from its intended effects of convincing you to support or buy from a particular brand, can <a href="">manipulate your thinking</a> if you think it’s targeted. In a study, people shown an ad for something eco-friendly or sophisticated rated themselves as more eco-friendly or sophisticated, respectively, if they thought the ad was targeted to them. The caveat was that the targeting had to be at least slightly accurate to be effective, but it showed that ads have an effect even if we know—or at least suspect—that they’re targeted to us.</p> <p>By targeting users, or by pretending to target users, it’s possible to promote one service or view over another.
There are no neutrality regulations for online services, so a site could push an agenda by showing only a certain type of ad. Sites that provide a tailored feed—including Google and Facebook—<a href="">keep their algorithms secret</a>, so there’s no real way to know if they’re actively hiding something from you. This is dangerous for the future of democracy as more people rely on the Internet for news and information. The tailoring that tech giants market as an improvement to our online lives may actually be making it harder to be informed, especially since the tech <a href="">sometimes gets things wrong</a>.</p> Mon, 30 Oct 2017 00:00:00 -0400 Beyond Network Neutrality <p>Net neutrality is immensely important to keeping the Internet open for every voice and for ensuring that no ISP can curate what information its customers have access to. However, it’s not the only neutrality fight important to keeping the Internet alive. While net neutrality requires ISPs to provide access to the entire Internet and deliver every site equally, there are few neutrality requirements for online services themselves. As the world moves online and hosting services and social media become critical platforms for independent voices, <a href="">there is no guarantee</a> that those platforms will be neutral. With allegations around foreign interference via online platforms in the 2016 election and actions various cloud providers have occasionally taken to silence sites, the discussion of how online services can curate what their users see is building.</p> <p>Hosting and social media platforms have a lot of editorial power when it comes to what’s on their platforms. Services like Amazon Web Services (AWS), Microsoft Azure, Cloudflare, and others power large websites of all backgrounds. <a href="">A recent AWS outage</a> showed just how much of an impact AWS alone has on the Internet by bringing a huge number of sites down, including Netflix.
Cloudflare, a different kind of service, which promises certain security protections and performance improvements rather than hosting, provides services to <a href="">nearly 4.3 million websites</a>. All of these providers have the ability to take entire sites offline, accidentally if they have an outage or intentionally if they decide to stop doing business with a customer.</p> <p>This has happened already; <a href="">Cloudflare dropped a neo-Nazi website</a> that relied on its services this year (followed by the site being forced to move its domain registration out of the U.S. and eventually being pushed off the Internet entirely). Cloudflare’s CEO later suggested that the move was perhaps unwise, but that it was well within the power of companies to take such actions. It’s easy to side with Cloudflare, and indeed many celebrated its actions, which snowballed into the site going offline. However, the action raises a tough question of what powers online services should have. While taking away a platform for hate speech may generally be accepted, whether online platforms should be able to make that determination and whether they should have the power to silence something is up for discussion. Cloudflare’s CEO voiced his own views on the issue, which boil down to saying web companies shouldn’t do what Cloudflare did, but that he still supported the action.</p> <p>This kind of takedown isn’t widespread as far as we know, but censorship and targeting happen on a wide scale in other ways online. Interest targeting creates curated online worlds that vary from person to person. This is a widespread practice to keep users coming back and to improve the click rate of ads. Interest targeting isn’t necessarily malicious in intent, as it’s primarily a side effect of algorithms designed to increase revenue. However, algorithms are not infallible.
Even the most trusted sites have had their services surface and promote entirely fake information, with <a href="">Google promoting conspiracy theories</a> and Facebook <a href="">allegedly influencing such stories</a> as they appear in its trending topics feature, where moderators reportedly suppressed conservative stories.</p> <p>Even if ISPs are required to be neutral carriers of data, curation of content by online services poses a problem. If the whole Internet forces something offline, it doesn’t matter how neutral your ISP is. In some ways, it’s more insidious because of how invisible and defensible it can be. GoDaddy, for example, dropped the aforementioned neo-Nazi site, explaining that the site was <a href="">violating their terms of service</a>, followed by Google with the same explanation. It takes very little to make a site disappear - simply dropping it to the second or third page of search results hurts a site tremendously, as 90% of people using Google <a href="">don’t venture past the first page</a>. If the right online services take issue with you, your voice can disappear, and there’s not much you can do about it, if you even know what’s happening.</p> Mon, 16 Oct 2017 00:00:00 -0400 What does "metadata" actually mean? <p>One of the buzzwords around online surveillance and leaked NSA data collection programs has been the word "metadata." The government doesn’t collect the content of communications, just the "metadata," we’ve been assured, which seems to imply that collecting only metadata—though at times, far more than just metadata has been collected—is acceptable and respects our privacy. Unfortunately, "metadata" is a broad term and allows for a large amount of data collection.
Not only that, but collecting metadata <a href="">is not subject</a> to regulations as stringent as those requiring warrants for wiretapping.</p> <p>The Merriam-Webster dictionary defines "metadata" as "<a href="">data about other data</a>." Everything stored digitally has some sort of metadata associated with it, such as where it lives on a computer, who created it, who owns it, or where it came from. The <a href="">JPEG image format</a>, which is widely used by cameras and smartphones, includes the date and time a photo was taken, its location (as GPS coordinates, if available), and the camera make and model that took it, and supports adding arbitrary notes to a file. Often, metadata is almost completely invisible unless you’re specifically looking for it. Other times, it’s a core part of making things work. The "To" and "From" fields in an email <a href="">fall under the category of metadata</a> because they are data about the content—that is, where it’s intended to go and where it came from.</p> <p>Because it is so often invisible, metadata can be downright dangerous to privacy. It has helped law enforcement track files <a href=";pg=PA417#v=onepage&amp;q&amp;f=false">to specific individuals</a> who were breaking the law, such as in the case of Dennis Rader, where a deleted Microsoft Office file contained information that allowed police to determine who he was. Image sharing sites, which can seem fairly innocuous, have had problems around revealing the location and camera data embedded in photos. Imgur, a popular site for sharing images of all kinds, <a href="">attempts to strip that data</a> when pictures are uploaded, as a measure to improve user privacy. The data can be used as a means of figuring out a photographer’s secrets to taking great photos, or can reveal where a person lives—including their home address—to strangers online.
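</p> <p>To make the "data about other data" idea concrete, here is a small sketch in Python using only the standard library. The file name and contents below are hypothetical, invented purely for illustration:</p>

```python
import os
import time

# Create an ordinary file; the content is a single short sentence.
path = "example.txt"
with open(path, "w") as f:
    f.write("The content itself is not the only information here.")

# The operating system records metadata about the file automatically --
# data about the data -- without ever looking at the content itself.
info = os.stat(path)
print("Size in bytes:", info.st_size)               # how much was written
print("Last modified:", time.ctime(info.st_mtime))  # when it was touched
```

<p>Even a file that is never shared reveals when it was touched and how large it is, all without reading a single byte of its content.</p> <p>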
Exif data, which is the image metadata stored in JPEGs, <a href="">has been specifically listed</a> as one type of metadata collected by the NSA XKeyscore program.</p> <p>In addition to the information explicitly collected, metadata collected over time can reveal a lot about the context of someone’s activities. With a lot of data points, modeling a person’s behavior is possible. With information about the cell tower a person made a call from and the people they talked to, it might be possible to figure out <a href="">what someone was doing</a>, especially with a lot of other data points to compare against. Over a number of years, call records can build a picture of who you talk to, who you’re close to, and when and where you talk to people. Theoretically, U.S. law protects U.S. citizens and limits how much information can be gathered without a warrant. However, a person’s network can include someone of interest to the NSA, and it’s <a href="">difficult to determine</a> whether someone is a U.S. citizen based on their metadata. Without specific information about someone, it’s assumed that they’re a non-U.S. person and can be monitored freely. That’s not including, of course, the amount of data <a href="">collected domestically by accident</a>.</p> <p>With smartphones, we create more metadata than ever before with information tagged on images, emails, phone calls, and web browsing habits (because, yes, <a href="">the address of a website is metadata</a>). We create so much metadata that a lack of it could be seen as suspicious. Even with nothing to hide, our privacy is at stake. We don’t know for sure how much data is collected, how it’s used, or how it’s secured.
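</p> <p>The call-records point above can be sketched in a few lines of Python. The names and records here are entirely invented for illustration; real metadata analysis operates on billions of such rows:</p>

```python
from collections import Counter

# Hypothetical call-record metadata: (caller, callee, hour of day).
# Note there is no call content here at all -- only metadata.
calls = [
    ("alice", "bob", 9), ("alice", "bob", 9), ("alice", "bob", 21),
    ("alice", "clinic", 14), ("alice", "clinic", 14),
    ("alice", "dave", 12),
]

# Simply counting callees reveals who alice is closest to, and the
# timestamps hint at context (a late-night call, repeated clinic calls).
contacts = Counter(callee for _, callee, _ in calls)
print(contacts.most_common())  # most frequent contacts first
```

<p>Even this toy dataset begins to sketch relationships and routines, which is exactly what makes bulk metadata collection so revealing at scale.</p> <p>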
We might be sharing things we’re not even aware of, and we may not know who is listening because, in the past, the government <a href="">appears to have simply ignored</a> data sharing rules, hacking and data leaks aside.</p> Mon, 04 Sep 2017 00:00:00 -0400 Opposing net neutrality threatens the viability of open source communities <p>The net neutrality discussion is, at its core, about free speech on the internet. Free speech online is a driving force for the online community; an average of <a href="">1.32 billion people each day share their voices on Facebook alone</a> (as of June 2017). It’s possible to be heard as well, with more than half of Americans using the internet as their primary source of information. Unfortunately, internet service providers (ISPs) want their own say in how free speech actually is online, with some <a href="">claiming their own rights to free speech</a> when it comes to what people can access.</p> <p>ISPs are serious about their free speech claims when it comes to net neutrality. Several ISPs and telecom associations have filed briefs with the U.S. Court of Appeals arguing that net neutrality prevents them from favoring their own services in order to send their own message. On that point, ISPs aren’t wrong: net neutrality does require ISPs to deliver all websites the same way, without any sort of paid prioritization or throttling.</p> <p>The idea of a telecom exerting editorial control over what parts of the internet can be accessed is deeply concerning. ISPs are not only a gateway to information, but many of them also own media outlets of their own.</p> <p>Violating net neutrality gives ISPs control over who can be heard, what can be accessed, and (potentially) what opinions can be held, and it isn’t necessarily obvious when they exercise that control.
Research from Microsoft suggests that <a href="">slowing down a website by just 250 milliseconds</a> (the blink of an eye) makes users more likely to use a competing service, even though the speed difference is too small for humans to consciously notice.</p> <h3 id="open-source-intersection">Open source intersection</h3> <p>Such restrictions could change the open source landscape drastically. Although open source software powers much of the modern world, <a href="">with 78% of companies running open source software</a> in 2015, that doesn’t mean projects won’t feel the effects of a more restricted internet. While larger organizations such as the Apache Foundation or Mozilla might fare okay in a world without net neutrality, smaller projects could be drowned out by ISP restrictions.</p> <p>Even those larger open source communities might find themselves becoming niche if they’re overshadowed by larger companies that can afford to sponsor data or exist in faster tiers. This could cause companies or individuals that would be otherwise willing to support free and open source software (FOSS) to choose a proprietary option due to better access.</p> <p>Zero-rating, ISP agreements, and throttling are already making this a possibility, with <a href="">a big-name ISP recently caught throttling Netflix</a>, and Netflix <a href="">making agreements with ISPs</a> to place servers on their networks for better performance. It’s much harder to argue for open source options when they come with an extra toll. This works in reverse, too, making it harder to make meaningful open source contributions due to worse access, restricted reference materials, and limited data. Lack of competition in the ISP market may mean that, for most, a more FOSS-friendly option doesn’t exist.</p> <p>The good news is that the open source community can support net neutrality and alternate options for accessing the internet.
Projects such as Tor, VPN technologies, and proxies make it harder for ISPs to track and restrict internet traffic, although they don’t avoid data caps. Other projects, such as community-developed mesh networks or municipal internet, can provide more options for unrestricted access to the internet. Successful municipal networks have been developed in <a href="">at least 500 communities</a> in the United States (as of June 2017), including the often-cited Chattanooga, Tennessee, network. Community-owned mesh networks such as <a href="">PittMesh</a> in Pittsburgh, Pennsylvania, or the <a href="">Commotion</a> tool could operate on a wide scale with enough participants. Should net neutrality be overturned, these projects may be an essential part of getting online.</p> <p>This isn’t the first time we’ve had a discussion about net neutrality. The breakup of the Bell telephone monopoly was one major net neutrality battle that was fought and won, resulting in Title II and the phone network we’re accustomed to today. Cable TV was a net neutrality battle that was lost, giving us the cable networks—and their ability to drop networks that don’t agree to their terms.</p> <p>With strong net neutrality regulations and alternate options for accessing the web, the internet can stay a place where <a href="">freedom of speech reigns</a>. If we don’t fight for net neutrality now, we’ll see shrinking online communities, fewer choices, and less ability to make ourselves heard.</p> Wed, 26 Jul 2017 00:00:00 -0400 Keep Up with Net Neutrality - Preorder My New Book! <p>July 12, 2017 was the Day of Action for Net Neutrality. Across the Internet, sites showed banners, slowed themselves down, and tried to make it clear what a non-neutral Internet might look like. It’s not pretty.
A tiered, throttled, and restricted Internet would likely hurt your favorite web services, making them harder to access and more expensive.</p> <p>Net neutrality is one of the most important digital rights battles we’ve had so far. The battle is raging and doesn’t appear ready to end soon, with the FCC suggesting it will ignore pro-net-neutrality comments and telecom lobbying kicking into high gear. Net neutrality matters to everyone because it prevents ISPs from picking and choosing what parts of the Internet - and what information - you have access to when you get online. If we lose net neutrality, ISPs will gain a lot of power over our ability to be informed.</p> <p>Net neutrality is so important that I’m releasing a book about it on August 7, 2017. It details how some ISPs - maybe even yours! - have been working to undermine a neutral Internet and the consequences should they succeed. It’s currently available for preorder as an eBook at a discount until release. Make sure you know the details about net neutrality and that you’re on the right side of this digital rights battle.</p> <p>You can find store links and further details at <a target="_blank" href=""></a>. If you found my series of net neutrality posts informative or enjoyable, you’ll probably like this too. Help spread the word! Knowing is half the battle.</p> <p>If you want to take action in the net neutrality fight yourself, you can find out how you can help save the Internet <a target="_blank" href="">right here</a>.</p> Sat, 15 Jul 2017 00:00:00 -0400 Go Comment on the FCC Net Neutrality Proposal <p>This week, the FCC published its promised proposal to dismantle net neutrality rules. The proposal claims to "restore Internet freedom" by undoing Title II classification. In addition, the proposal asks whether net neutrality rules are necessary to protect the Internet at all. This comes after a court decision that refused to re-hear a challenge to Title II.
Rather than focusing on the benefits of Title II and net neutrality, or on the fact that Title II has been found legal, the proposal focuses on dissenting opinions that have in some cases been debunked (or are generally accepted to be false) in order to claim unproven holes in net neutrality.</p> <p>The FCC head has called net neutrality a mistake, arguing, among other things, that there is plenty of competition in the broadband Internet market, that net neutrality would harm consumers, and that net neutrality would stifle innovation. Evidence to the contrary on all of those points is plentiful. Very few people have more than two home Internet options available, and wireless service is mostly powered by only four networks (other carriers rent space on those networks). Net neutrality brings privacy and transparency requirements which protect ISP customers from having their private data sold and from unfair pricing. Online innovation is credited to companies like Google, Netflix, and Facebook - not ISPs, who make up a minority of those companies.</p> <p>Considering that the FCC has already ignored the previous round of comments, which were in favor of net neutrality, why is commenting worth the time? The FCC is not a typical legislative organization and is required to gather public feedback on its proposals (though many of them are things the average person has no interest in). The FCC is required to take this feedback into account when making and voting on proposals - and the feedback is also taken into account when legal challenges to FCC regulations arise. While the commission appears likely to continue to ignore this feedback, multiple organizations are preparing lawsuits should the FCC finalize its proposal for taking apart net neutrality requirements.
In the ensuing legal battles, should Title II be voted down, the public feedback left on the proposal will be taken into account in the case against the FCC.</p> <p>Net neutrality matters to you because it prevents your ISP or wireless carrier from choosing what you can see, what services you can use, and whether you can share your own views. ISPs are powerful companies that could control your online world. Net neutrality may be one of the most important battles of the modern age when it comes to our access to information and our freedom of speech.</p> <p>The battle for net neutrality will not be a short affair - and hasn’t been one so far. Join the nearly 2.6 million comments already on the proposal. Another vote will be held after the public comment period, so make sure your voice is heard - the Internet depends on it. <a href="">Help keep the Internet open, neutral, and competitive</a>. The FCC has implied that quality, not quantity, is what’s most important to it in this round of commenting, so make sure to be precise and explain your views well. If you’re not sure what to write, that link has the option to use a pre-written letter.</p> Wed, 24 May 2017 00:00:00 -0400 The Fight for the Internet <p>In 2002, the FCC <a href="">classified Internet providers under Title I</a>, an "Information Service" classification. This is generally regarded as a win for net neutrality advocates but didn’t go as far as many wanted. Title I classification has been referred to as a "hands off" or "lite" classification in that it recognized that the Internet was an important means of communication but provided minimal regulations around that status. Specifically, Title I allowed the FCC some indirect authority to regulate interstate and international communications, but did not allow regulation of services themselves. Net neutrality advocates considered this to be too little.
While the FCC promised that Title I would allow it to enforce net neutrality as needed, Title I by its definition did not allow it to follow up on those promises.</p> <p>ISPs (in particular, Verizon) fought Title I classification and <a href="">rightly won against the FCC in court in late 2013</a>. They argued that, by the FCC’s own definition of the classification, the FCC was not permitted to regulate them. Verizon and others could not be fined for Title I violations that did not happen on an interstate or international level. The long-term outcome of the Verizon win was a formal declaration that ISPs would need to be classified under Title II if the FCC wanted to impose the regulations it promised. In 2015, <a href="">it did just that</a>.</p> <p>Title II <a href="">gives the FCC authority to protect ISP customers</a> from "unjust" practices such as discrimination against certain content types, among other things. It also gives the FCC the ability to enact policies that would encourage and expand competition in the Internet provider industry. Both of these are key for protecting net neutrality.</p> <p>While many articles surfaced claiming Title II was a nightmare for ISPs, and ISPs themselves fought against it, there was minimal business impact. Verizon executives <a href="">are on record</a> saying that Title II would have no impact on their infrastructure investments or larger business. Sprint even came out in favor of net neutrality regulations, a departure from other ISPs. To put it loosely, life went on, but with a safer Internet. For service providers, it was business as usual - just with a few extra requirements for transparency and regulations around what they could and could not do to traffic on their networks. That’s not to say that telecoms are not continuing to fight against net neutrality.</p> <p>Due to the ongoing fight, net neutrality has never been fully secure.
Despite advocates’ best efforts to be heard, which in both 2016 and this year have crashed the FCC website, Congress and some members of the FCC remain opposed to net neutrality. While disagreement is a healthy part of a democracy, public opinion is that the Internet <a href="">should have neutrality protections</a>. In 2016, millions of comments were submitted to the FCC in support of Title II classification. Just this month, the FCC website experienced problems yet again for more than a day due to the volume of comments (the filing period is still open, <a href="">and you can submit your own comments</a>). Despite that, Congress has in the past voted to strip some of the FCC’s regulatory powers, and the current FCC itself has laid out a plan to tear apart net neutrality.</p> <p>On the non-legislative side, the fight against net neutrality continues as well. An ISP group is <a href="">currently running a misleading ad campaign</a> claiming to support net neutrality, just without Title II classification, suggesting that the two should not be equated. Verizon <a href="">published a video interview</a> with its legal counsel explaining that net neutrality is at the forefront of its priorities but that the legislation for it is a mistake. There are cable-industry-run websites that attempt to make a case for why net neutrality is a problem. Claims were made that ISPs would remain committed to customer privacy and to competition. However, telecoms <a href="">are the ones fighting privacy and neutrality regulations</a> that they claim to be in favor of. That’s not to mention that Title II is net neutrality, according to the court ruling in Verizon v. FCC over Title I. Worse is the fact that telecoms have already blatantly violated net neutrality principles in the past and now have the technology to do far worse in less obvious ways.</p> <p><em>If you care about a neutral and open Internet, you can join the net neutrality fight too.
<a href="">Get started here</a>.</em></p> Thu, 11 May 2017 00:00:00 -0400 Online Surveillance Briefing <p>Along with the free flow of information the Internet provides, the Internet has also been a powerful means of government and corporate surveillance. Learning a lot about someone online is not a difficult task, even based only on public information, which some websites compile and sell access to. People such as Richard Stallman <a href="">have been vocal about governments keeping closer tabs on their citizens via the Internet</a> and through other means. For a long time, the theories of people like Stallman seemed plausible but unlikely and were pushed aside as nothing more than conspiracy theories. It wasn’t until the Snowden leaks that those ideas were proven to have some validity. It turns out that even the worst of Stallman’s suggestions about surveillance <a href="">are true</a> and that many government agencies use wide-reaching data capture programs to collect and store information about pretty much everyone.</p> <p>Most people are aware that publishing personal information to the Internet is a bad idea. However, it’s hard to denounce surveillance that uses those public sources of information—if the rest of the Internet can see it, then so can the government. Unfortunately, Snowden revealed that it goes deeper than that. As more data gets routed over digital networks - think <a href="">phone calls and text messages</a>, things people generally don’t consider as being "online" - the possibility for more data capturing gets bigger as well. The NSA has claimed that it only targets suspected terrorists or people coming and going through international borders. Unfortunately, its systems, from the leaks we’ve seen, are not nearly so targeted in their surveillance. There <a href="">is evidence that activity of normal law-abiding U.S.
citizens is routinely scooped up as part of these data collection programs</a> and that their data is collected in greater volume than the data of the people actually targeted by the program.</p> <p>As a law-abiding citizen, why should you care? You have nothing to hide from the government. Unfortunately, it’s not quite so cut and dried. The problem with these data collection programs is that <a href="">we don’t actually know</a> the entirety of what they’re used for and how they actually affect U.S. citizens. The NSA, in a single year, had <a href="">2,776 violations of policies</a> around accessing collected data. What’s worse is that the security of these government resources is imperfect. The information built up over time, even by accident and even for law-abiding individuals, is a massive treasure trove for hackers. The U.S. government has <a href="">had large data breaches in the past</a>, which shows that even the government has problems protecting data. Whether the general public trusts the government with this data is questionable. As of a 2009 survey, only 19% of Americans <a href="">trusted the government to do what was right</a>.</p> <p>The NSA has made the claim that only “metadata” is collected and only on a very targeted basis. “Metadata” may not contain the actual content of communications, but it <a href="">contains a lot more information than those collecting it might have you believe</a>. With only metadata, it’s possible to track where a person goes, who they talk to and who they have relationships with, who they do business with, and their general routine. That’s enough information to stalk someone, blackmail them, or to know when they’re not home, making a burglary easy.</p> <p>The amount of data collected stands to expand drastically with the explosion of the Internet of Things—devices that collect all sorts of seemingly mundane information about your life to make your life easier.
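To make the earlier point about metadata concrete, here is a small, purely illustrative sketch. Every name, number, and record below is invented; no real dataset or surveillance API is being described. It shows how a bare who-called-whom log, with no call content at all, already yields a contact graph and a daily routine:

```python
# Hypothetical illustration of what plain call *metadata* reveals.
# All records are invented for this example.
from collections import Counter

# (caller, callee, hour_of_day) -- no call content whatsoever
call_records = [
    ("alice", "bob", 9), ("alice", "bob", 9), ("alice", "bob", 10),
    ("alice", "oncology-clinic", 14),
    ("alice", "pizza-place", 19),
]

def contact_frequency(records, person):
    """Who a person talks to and how often -- a social graph in miniature."""
    return Counter(callee for caller, callee, _ in records if caller == person)

def usual_hours(records, person):
    """The hours a person is typically active -- enough to guess a routine."""
    return sorted({hour for caller, _, hour in records if caller == person})

freq = contact_frequency(call_records, "alice")
print(freq.most_common(1))                    # closest contact
print(usual_hours(call_records, "alice"))     # daily pattern
```

A frequent contact, a one-off call to a sensitive number, and a routine all fall out of metadata alone, which is why "it's only metadata" is a weak reassurance.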
These devices are making their way into everyday life and are always present, uploading information to various places to provide their features. Phones are already ubiquitous and in other countries <a href="">have been used to track protesters</a>. And we now know that there is far more to this than the ramblings of Stallman. While online surveillance has been a relatively quiet battle, especially while the more immediate issue of net neutrality rages, the fight against online surveillance has been raging too. Companies like <a href="">Let’s Encrypt now allow anyone to serve their websites over an encrypted connection</a>, others <a href="">such as Google have started encrypting data in transit</a> over their own networks, and still others like <a href="">the EFF</a> are fighting for the right to privacy online. The fight against online surveillance is part of the fight for democracy, human rights, and the open flow of information in the digital era.</p> Fri, 28 Apr 2017 00:00:00 -0400 What Net Neutrality Is Not <p>Net neutrality is a set of new or strengthened restrictions on Internet service providers that prevent them from prioritizing some content over other content. Regulations such as these are essential for making sure the Internet is an open flow of information; that is, that ISPs are not gatekeepers to information. Internet providers often <a href="">argue that net neutrality rules would stop them from expanding and improving their networks</a> by removing their ability to force upstream services (like, say, Netflix) to pay them for their traffic to be fast and reliable. Wireless providers have even suggested that they should be exempt from net neutrality guidelines <a href="">because bandwidth is more limited over wireless networks</a>. These arguments are not necessarily invalid. Service providers still need control over their own networks so they can continue to grow and evolve to support a changing Internet.
However, net neutrality and an expanding Internet can coexist, without making access more expensive.</p> <p>On most modern networks, different types of traffic have different needs and must be prioritized differently to guarantee everything works reasonably well. Your phone call, for example, should not drop because someone on the same network decided to load up Facebook. Prioritizing traffic so that your phone calls can coexist with gaming and web browsing is called <a href="">traffic shaping</a>. Traffic shaping is important as more things - such as phone calls - are pushed to move over the Internet. Some networks route phone calls over the same network as data, with <a href="">something called Voice over LTE (VoLTE)</a>, to provide better call quality. Other networks, such as Republic Wireless, use VoIP (voice over IP) to <a href="">route wifi calls over the Internet</a>. Prioritization of VoIP (or gaming, or streaming, etc.) traffic does not violate net neutrality practices. Net neutrality involves regulations that prevent prioritization of websites - no matter how bandwidth-hogging. So, while service providers can do the required prioritization of different types of traffic (traffic shaping), they cannot serve different websites at different speeds (content shaping) <a href="">due to strict rules regarding what is reasonable</a>. Information can still be accessed equally, but different types of traffic can be managed as needed. Equal access to information is what net neutrality provides - not restrictions on how a network can be managed.</p> <p>Worth noting is that net neutrality regulations <a href="">do not prevent Internet providers from protecting their networks against malware or other illegal activity</a>. Providers would still be able to block or throttle (slow down) illegal activity on their networks.
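The distinction between traffic shaping and content shaping can be sketched in a few lines. This is a toy model, not a real QoS implementation: the packet fields, traffic types, and priority numbers are all assumptions made for illustration.

```python
# Toy sketch: shaping by *traffic type* (allowed under net neutrality)
# versus shaping by *destination site* (a violation).
# Priorities and categories are illustrative, not a real QoS configuration.

TYPE_PRIORITY = {
    "voip": 0,    # latency-sensitive: phone calls go first
    "gaming": 1,
    "web": 2,
    "bulk": 3,    # downloads and backups can wait
}

def traffic_shaping(packet):
    """Prioritize by what kind of traffic this is -- never by whose it is."""
    return TYPE_PRIORITY.get(packet["type"], 2)

def content_shaping(packet, slow_sites):
    """What a non-neutral ISP might do: throttle by destination website."""
    return 3 if packet["host"] in slow_sites else 0

voip_call = {"type": "voip", "host": "netflix.com"}
video = {"type": "web", "host": "netflix.com"}

# Traffic shaping distinguishes the call from the video stream by *type*,
# regardless of destination:
assert traffic_shaping(voip_call) < traffic_shaping(video)

# Content shaping would throttle both, purely for where they're headed:
assert content_shaping(video, {"netflix.com"}) == 3
```

The key design point: the neutral function never reads the `host` field at all, which is exactly the property the regulations are meant to preserve.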
Other regulations already pertain to such activities.</p> <p>Despite their general opposition to net neutrality, some ISPs have actually supported its principles. <a href="">Verizon and AT&amp;T in 2008 agreed with keeping their broadband open with regards to net neutrality</a> - although they were opposed to applying net neutrality to wireless networks. At the time, this made sense - mobile data was a relatively new technology in 2008 that had far more limitations. While wireless networks still have limitations, they do not need an exemption from net neutrality rules. Verizon is <a href="">considering using 5G wireless instead of laying cables</a>, and <a href="">uses Voice over LTE for some phone calls on its network</a>. Those two facts alone show that bandwidth is not at nearly as much of a premium as they would still suggest it is. What’s more, Verizon, T-Mobile, Sprint, and AT&amp;T are now aggressively marketing unlimited data plans, and virtual carriers that use their networks, such as Ultra, now <a href="">provide similar unlimited data</a>. While the Internet has grown substantially since 2008, especially with streaming video from services like Netflix, networks have been able to keep up even with a mostly neutral Internet. Internet providers would have customers believe otherwise with data caps, but by their own admission data caps <a href="">have nothing to do with actual network limitations</a>.</p> <p>Net neutrality does not mean making the Internet free at taxpayer expense. There are already subsidies in place for low-income households to gain access, <a href="">which in 2016 were expanded</a>. These subsidies are not related to the ongoing net neutrality fight. Net neutrality is about equal access to the Internet through a connection you are already paying for, if you can afford one.
This means that no matter where you go online, be it at home, on the go, at a public library, or anywhere else, you can expect to be able to access the same sites at the same speed (limited of course only by the connection speed). This is a similar expectation to using a telephone at any of those same places - equal ability to call anyone on Earth no matter where the phone is. The exception to this is areas that choose to implement municipal broadband. In those cases, there may be taxpayer expense for installing and maintaining a municipal network. Municipal networks help the cause of net neutrality but are not required for a neutral Internet.</p> <p>Net neutrality is still evolving with regards to legislation. Partly due to that and partly due to marketing and lobbying efforts, there are misconceptions about what net neutrality is and what it costs. The Internet continues to evolve at breakneck speed and is an integral part of doing business in the modern world. Making sure that evolution can continue while protecting access to information online is the point of net neutrality. As more people rely on the Internet for finding jobs, doing business, and staying informed about the world, ensuring ISPs do not become gatekeepers to information is extremely important.</p> Thu, 20 Apr 2017 00:00:00 -0400 What a Non-Neutral Internet Might Look Like <p>Verizon and Time Warner Cable (now Spectrum after a merger) have stated that they are committed to an open and unfettered Internet. However, recent practices such as zero-rating have started to bring those statements into question and show the possibility of the first cracks in net neutrality in the U.S. Already, services that provide access to only a small collection of websites exist. We can also look to the UK, where one ISP has taken things further by throttling (slowing down) certain types of traffic, imposing data caps, and selling Internet packages with varying privileges. 
This bears a strong similarity to how cable TV is sold, where the number of channels you can watch depends on the cable package you subscribe to, and where some networks, such as HBO, are often available only at an extra cost. China, which is well known for its restricted Internet, is another example of what a non-neutral Internet can look like.</p> <p>Although net neutrality <a href="">has since improved</a>, UK laws at one point allowed broadband Internet providers to impose any limits on Internet connections as long as they were transparent about the limits they had in place. In 2009, some UK providers took advantage of those laws to develop heavily restricted Internet packages. One major provider, called BT, <a href="">slowed down streaming video among other things, much to the annoyance of the BBC, which had a new service in place for streaming BBC shows online</a>. They also throttled other services and would cap the data and speeds of what they classified as “heavy users”. There were three plans offered to BT customers. The first allowed 10GB per month of data use with heavy throttling (for perspective, <a href="">in 2012 the average monthly Internet use in the U.S. was 52GB per month</a>). It also limited monthly video streaming. The second plan allowed 20GB of data, still with heavy-usage throttling. The third was an unlimited plan, which still included throttling for “heavy use”. Of course, with a better plan came a higher monthly cost. The alternative was to switch to another service provider, which in the UK was less of a problem than it is in the U.S. because the UK has much more competition when it comes to Internet service. Other than the unlimited plan, the plans offered by BT would not have provided enough data for the average U.S.
household in 2012, and would have made services such as Netflix a rarity due to the limits and throttling of streaming video.</p> <p>China, which is well known for its heavily restricted Internet, is another demonstration of a non-neutral Internet. Internet users in China are separated from the rest of the world by what’s known colloquially as “The Great Firewall of China”, which <a href="">as of 2015 blocked access to some 3,000 websites</a>. The list of blocked websites included Google, Yahoo, and Twitter as well as a variety of news sites and other services. China’s Internet is so restrictive that in 2010, Google <a href="">even considered shutting down its operations in China</a> (partly due to alleged state-sponsored hacking). In fact, this website was blocked in China for a while - and after the posting of this article may be blocked again, because websites that criticize the government or Chinese censorship are typically blocked automatically based on their content (you can check if it’s blocked <a href="">here</a>). Certain things <a href="">such as mentions of Tiananmen are a surefire way for a site to get blocked</a>. In order to provide a relatively modern Internet, there are state-sponsored social media sites as alternatives to the Facebook and Twitter of the rest of the world. While The Great Firewall of China makes it difficult to access a lot of information that the Chinese government deems distasteful, it isn’t perfect. Using services such as VPNs - which have been blocked on and off as well - it is possible to access websites that are blocked. However, doing so can attract the attention of authorities. What can make these restrictions more frustrating is that certain sites are sometimes allowed and sometimes not, depending on where in China you happen to be and on current events in the world.</p> <p>In the U.S., it’s unlikely that websites would be all-out blocked on a non-neutral Internet.
In particular, due to the freedom of speech and freedom of the press guaranteed by the Constitution, it’s extremely unlikely that there would be widespread state-sponsored censorship of the Internet. However, service providers can encourage the use of some services over others through data caps and zero-rating, or sell packages of websites and services. That would mean that instead of buying a speed of access, you might buy a “gaming” package for an extra cost, or an “investing” package for access to financial news sites. U.S.-based companies are already doing this. Some services, like Facebook Free Basics - which are not currently sold in the U.S. - provide access only to a list of specific websites. If larger ISPs start to provide similar service tiers, there would be no escape, due to the lack of competition. Despite their claims to the contrary, mainstream Internet providers have already started down this path. Verizon and AT&amp;T have been named in net neutrality lawsuits for encouraging the use of their own services over others. What’s worse is that the FCC no longer wants to protect consumers from these practices, so a non-neutral Internet may be coming.</p> Sat, 08 Apr 2017 00:00:00 -0400 Your Internet Versus Your Privacy <p>While many of us are generally aware that various sites track us in order to sell advertisements, we usually don’t give much thought to whether our ISP might be collecting similar information. The expectation of privacy from an Internet Service Provider is important because they are the gateway to the Internet. No matter how many anti-tracking browser add-ons you might have, your Internet provider can still see what you visit online. The only way to avoid your ISP seeing what websites you visit is to use a service such as a VPN - which is sort of like paying for a secure gateway to the Internet somewhere else, one that your ISP can’t see into.
This means that unless you are willing and able to pay for privacy, your Internet provider likely knows more about you than you’re comfortable with.</p> <p>As being tracked online becomes pretty much ubiquitous, <a href="">privacy has become more valuable</a>. Almost everything on the Internet has some form of tracking installed. Even this website uses a very common tracking tool called Google Analytics, which makes it possible to drill down into all kinds of information about visitors. Other websites gather data about visitors that they sell or use directly to target advertisements to the people who seem most likely to click on them. That sort of data is extremely valuable on a wide scale - <a href="">the U.S. was worth over $2.8 billion in advertising revenue to Facebook in 2016</a>. What’s worse is that this tracking isn’t limited to a single site - <a href="">visiting any website with Facebook “like” buttons is enough for Facebook to track where you’ve been</a> - and there are many other services that do the same thing. The layperson is often unaware of this, but it can become obvious when a quick search results in a month of banner ads for something embarrassing (or amusing to the people sitting nearby on public transit). There are relatively easy ways to prevent this sort of tracking - <a href="">browser add-ons such as uBlock can be installed that block most tracking services</a>.</p> <p>Since access to the Internet is more or less the normal state of affairs for many people, it’s easy to forget <a href="">just how much information can be gathered by a service provider</a>. While encryption helps, it’s still possible to see what websites are visited and how often. Hiding this usually requires buying access to a VPN service to hide traffic from your Internet provider, or using a technology like Tor. At some point, the service you access the Internet through needs to know where to send your traffic.
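To illustrate how much an ISP-side view reveals even when every connection is encrypted, here is a toy sketch. The hostnames and log format are invented for illustration; the only assumption reflected from the text above is that the ISP can see connection destinations (for example via DNS lookups) even though the payloads are unreadable:

```python
# Toy sketch: profiling a customer purely from connection *destinations*.
# Payload contents are encrypted and never examined; all hostnames invented.
from collections import Counter

# (hour, hostname) -- the ISP routed each connection somewhere
connection_log = [
    (8, "news-site.example"), (12, "bank.example"),
    (20, "support-group-for-illness.example"),
    (20, "support-group-for-illness.example"),
    (21, "job-search.example"),
]

def visit_profile(log):
    """Websites visited and how often -- no decryption required."""
    return Counter(host for _, host in log)

profile = visit_profile(connection_log)
# Repeated visits to a sensitive site stand out immediately:
print(profile.most_common(1))
```

Which sites are visited, and how often, is exactly the kind of record a VPN or Tor is meant to hide from the provider.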
Your connected devices need to reach out to the Internet to fetch updates and other information, and seeing what they talk to makes it possible to figure out, at a minimum, who made them. This information can be used to determine all sorts of things about a person’s political views, income, health, and even when they’re most likely to be home. Should this information leak - either through hacking or through sale to the highest bidder - it opens up a lot of potential problems. It makes it possible to trick people into giving their information to the wrong website (phishing) and even opens up burglary possibilities.</p> <p>Net neutrality regulations improve online privacy because they can help to restrict what ISPs are allowed to track in Internet traffic. By forcing Internet providers to treat all traffic equally, there is less reason for ISPs to examine the traffic passing through their network for tracking purposes. In the same vein, it makes defending any such traffic inspection much more difficult. There are valid and necessary reasons to inspect network traffic. ISPs <a href="">need to ensure the security of their network against malware, hacking attempts, and illegal activity</a>. Completely forbidding ISPs from looking at traffic would be very bad for the health of the Internet. However, other than for tracking individuals, there is <a href="">very little reason to keep records of traffic content</a> and, of course, even less reason to sell them.</p> <p>Making privacy a commodity introduces yet another split between the informed elite who can pay for equal access to information and privacy, and those who can’t. Already, <a href="">there are problems with accessing the Internet at all in the U.S., with price being the main reason people don’t have an Internet connection</a>. What’s worse still is that being able to afford privacy does not make privacy accessible. Using a VPN or Tor requires some technical knowledge, which not everyone has.
As we work to make the Internet equally accessible to everyone, more people are exposed to the perils of their data being sold to the highest bidder or hacked, simply because they don’t know that protection is needed. The fight for equality needs to extend beyond the physical world and into the digital one, especially as the two mingle more and more.</p> Thu, 30 Mar 2017 00:00:00 -0400 You Already Paid for Net Neutrality <p>As with other infrastructure projects, taxpayer dollars have been granted to Internet providers for the purpose of expanding and upgrading their infrastructure. At a high level, this is fine, because the Internet is an important and arguably critical service in the modern world. Ensuring networks are up to modern standards is important for providing access to information, education, and other services. However, the network improvements expected from many of these grants have never materialized. Grants and subsidies amounting to over 400 billion taxpayer dollars by some counts have rarely resulted in larger or better networks. Despite those grants, there are still people in the U.S. who <a href="">do not have Internet speeds available to them that are usable for accessing modern websites</a>.</p> <p>ISPs have argued that if net neutrality dies and the regulatory schemes they support come to pass, <a href="">they will then provide Internet speeds that are competitive</a> with the rest of the world. In late 2015, the United States <a href="">ranked 42nd in average Internet speeds</a> out of 55 countries ranked by Akamai (<a href="">a huge cloud services provider</a>), despite being home to some of the largest online companies. The rating puts the U.S. below the global average for Internet speeds. The problem with the ISP argument that network improvements will come with a non-neutral Internet is that until recently, there has been little in the way of enforced net neutrality regulation.
ISPs already have a rocky past when it comes to keeping promises of network improvements, net neutrality or not.</p> <p>In New York, New Jersey, and Pennsylvania, Verizon <a href="">promised expansions to their fiber network (FiOS) in return for government subsidies and benefits</a>. However, while the government benefits were provided, at massive taxpayer expense, Verizon <a href="">never expanded their networks</a>. In New Jersey, the company even had its employees help it convince the New Jersey Board of Public Utilities <a href="">that DSL and the Verizon LTE network qualified</a> for meeting the terms of its agreement. LTE service from Verizon (and from most providers) is expensive, has data caps, and compared to fiber is extremely slow. Verizon even argued that the terms of the agreement did not require them to actually connect anything to the fiber they did install, just that they needed to run fiber down the street out front. What this means for taxpayers is that although they paid for fiber Internet service to be expanded to schools, libraries, and in some cases themselves, <a href="">they got no return for their money</a>.</p> <p>In West Virginia, <a href="">which in 2016 was still ranked almost last in the country for broadband Internet access</a>, Frontier was accused of misusing millions of dollars of federal funds for Internet improvement. $40 million was intended for building a network that would improve ISPs’ ability to <a href="">provide service to some 700,000 homes and over 100,000 businesses</a>. Lawsuits from 2014 <a href="">allege that Frontier claimed to have installed almost double the fiber they actually installed</a>. At the same time, Frontier <a href="">inflated costs by overcharging for administrative activities and vehicles</a>, in amounts that were at times more expensive than the actual construction itself.
Even worse, other lawsuits allege that Frontier used the federal funding to <a href="">further their monopoly on Internet service</a> by building what’s called a last-mile network to homes for their own service, rather than building a shared network for multiple ISPs to serve homes. In fact, after the Frontier project was completed, West Virginia ranked even worse for Internet access, falling from 48th to 53rd (in a ranking that includes Guam, Puerto Rico, and D.C.).</p> <p>In all, the U.S. <a href="">has spent some $400 billion to deploy fiber broadband connectivity</a>. Despite those investments, the U.S. is far from first in the world when it comes to Internet service. Despite hundreds of billions of dollars in government help to improve networks, ISPs still claim that they need to raise prices, cap and throttle data, and sell Internet in a similar fashion to cable. The fact that Internet providers can’t be held even to promises they made to local and federal governments underscores the need for better regulation and net neutrality. As taxpayers, we have paid thousands of dollars for Internet service improvements that have never happened, not to mention money spent directly paying for Internet service. Without regulations and enforced net neutrality rules, this stands to get worse.</p> Thu, 23 Mar 2017 00:00:00 -0400 How Net Neutrality is Being Undermined <p>By paying for an Internet connection - almost any Internet connection - it’s possible to get access to every piece of information and every viewpoint on Earth. It’s also possible through that connection to publish your own views across the Internet for no extra cost. What comes with this are certain privacy protections from your ISP (Internet Service Provider), which is important given that your Internet provider can see most of the things you do online. However, none of these are guaranteed rights.
The FCC has minimal powers to enforce net neutrality, <a href="">thanks to a previous rule change by Congress</a>, and its current leadership calls net neutrality “a mistake.” Service providers also have the ability to discourage the use of certain services through practices like zero-rating and data caps, which starts to limit your online world to whatever your service provider approves of.</p> <p>In the past, one of the biggest problems faced by net neutrality from a legal standpoint was a lack of enforcement. FCC penalties for violating net neutrality regulations have been <a href="">fairly minimal where they are listed at all</a>. Previous legal precedents even say that ISPs <a href="">don’t need to pay fines they weren’t warned about</a> - which means that because the regulations specify no concrete penalties, ISPs can easily avoid them even when the FCC chooses to impose any. Service providers already work around those penalties by carefully wording practices like zero-rating, explaining data caps as network constraints (<a href="">they have nothing to do with that</a>), and by lobbying Congress. The current FCC has even worse ideas about net neutrality regulation. The current FCC head <a href="">has referred to net neutrality as “a mistake”</a> and has already <a href="">started to dismantle privacy</a> and <a href="">transparency requirements</a>.</p> <p>Zero-rating is <a href="">already in play from Internet providers</a>, encouraging customers to use one service over another. By making agreements with content providers such as Netflix or the NFL, providers offer access to online content that doesn’t count against data limits. It’s hard to complain about free data (if you happen to use the zero-rated services), which makes this practice particularly nefarious. In general, it looks good to consumers, but it helps open the door to an Internet where providers <a href="">discourage using services other than the ones they zero-rate</a>.
By encouraging the use of their own services over others, Internet providers create a walled garden, meaning the information you can reach is determined by which ISP you use. In a world of apps for every possible purpose, this can also mean Internet providers control what features apps can offer by blocking some of the online resources those apps rely on.</p> <p>ISPs taking control over what online services people can access is already happening with services like Facebook Free Basics, which <a href="">provides access only to a collection of websites approved by Facebook</a>. Other free Internet providers such as Google Fiber have yet to directly do anything against net neutrality, but in a non-neutral Internet it would be within their rights to offer the same type of curated service. We can already see the effects of this walled-garden approach to Internet access by looking at countries where Facebook Free Basics is a major service provider. <a href="">Millions of Facebook users connected with the service don’t know there is an Internet beyond Facebook</a>, according to a 2015 survey.</p> <p>Something that makes net neutrality a difficult battle is that a non-neutral Internet is profitable for both ISPs and massive online services. While Google, for example, may claim they support net neutrality, they have little to lose if net neutrality is overturned and <a href="">appear to be making only minimal efforts to support the cause</a>. Large services and content providers can afford agreements with ISPs to provide better access to their services, <a href="">something that is already happening</a>, while small competitors don’t have the same resources. This means it’s far more difficult to start a new online service, while existing services continue to grow.
No matter how net neutrality is undermined or what form a non-neutral Internet takes, consumers are the ones who lose, because it’s their access to information, choices, and reasonable prices that is in danger.</p> Thu, 16 Mar 2017 00:00:00 -0400 Protecting Net Neutrality with Regulation <p>In 2015, <a href="">broadband was reclassified under what’s called Title II</a>, which classifies it similarly to phone service. This means that Internet Service Providers are not allowed to (among other things) “make any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services.” This is a good thing that protects ISP customers (which can be households, businesses, or even other ISPs), although Congress and the FCC could, and should, go further by also using Section 706 or, better, by creating Internet-centric legislation. Some service providers have been against Title II and Section 706, but interestingly Sprint <a href="">has spoken out in favor of it</a>, and Comcast <a href="">has admitted that net neutrality is not the problem ISPs have made it out to be</a>.</p> <p>The current Title II classification <a href="">gives the FCC authority to regulate ISPs in order to protect their customers</a> from “unjust and unreasonable practices.” This prevents service providers from discriminating against different content types for any reason (among other things). When ISPs own or are owned by media companies, <a href="">as is the case for many Internet providers</a>, allowing them to give paid prioritization (“fast lanes”) or zero-rating to the services they want their customers to use is dangerous. In an ideal world, the media would be totally neutral, but as things are now, many media outlets lean in one political direction or another.
In instances like this, a third party needs to step in to make sure all Americans have equal access to all viewpoints.</p> <p>Section 706 <a href="">allows the FCC to use regulation to promote competition in the market</a>, which is another important piece of net neutrality. Service providers are capable of offering better service at far better rates, but because they actively avoid competing with each other, there is no economic driver for them to do so. This is apparent in the current battle between cell carriers to offer the cheapest “unlimited” data plan <a href="">[1 - T-Mobile]</a> <a href="">[2 - Verizon]</a> <a href="">[3 - Sprint]</a> <a href="">[4 - AT&amp;T]</a>. There are no network capacity problems that prevent them from doing so, <a href="">based on their own admissions</a>. This sort of competition is good for consumers, who then have more options and better prices. In the wired world, service providers <a href="">quickly offered much faster packages at much lower prices when Google Fiber was expected to become available</a> in their area (Google Fiber at one point offered a free home Internet package). When sudden competition can drive down costs so much, it’s clear that the current free-market system is not working as it should. The ISPs get wealthier, while their customers find higher prices and lower quality of service.</p> <p>The legislative battle for the Internet is similar in many ways to the legislative battles over cable TV and telephone service, which provide a way to see some of the possibilities of a future with and without net neutrality. Telephone service over the majority of the United States was <a href="">provided by a single company called Bell System (known as Ma Bell) that exerted full control over the telephone lines</a>. They were effectively a monopoly. In order to connect something to a telephone line, even in your own home, you needed approval from the phone company. Often, you rented your telephone equipment from the same company.
The company was broken up in the 1980s and several <a href="">regulations followed that provide a more open and non-discriminatory system</a> that we enjoy today. Cable TV, on the other hand, <a href="">went the opposite direction</a>, giving us service split into “packages” of different channels which sometimes disappear due to disagreements between cable providers and the broadcasting network - the reason why <a href="">WFSB is currently unavailable to Optimum customers</a>, and why <a href="">The Weather Channel disappeared from DirecTV</a>. A non-neutral Internet is likely to look more like cable TV, with similar disputes and higher prices than the Internet we know now.</p> <p>Unfortunately, the FCC is limited in its powers to ensure that net neutrality regulations stay intact, and in its ability to enforce them. Indeed, the current FCC chairman has already described net neutrality as a “mistake” and even <a href="">quoted Star Wars Emperor Palpatine to criticize the enactment of the current rules</a>. Congress also voted <a href="">to weaken the FCC’s powers to enforce rules</a> such as Title II classification. To be fair, current laws are not designed with the Internet in mind. However, in the absence of Internet-specific legislation, we need to be proactive about making sure access to all information is provided equally and fairly to everyone.</p> Wed, 01 Mar 2017 00:00:00 -0500 Your City Can Provide Better, Cheaper Internet <p>Municipal Internet (or, depending on the technology, “municipal fiber”, “municipal broadband”, or “public broadband”) <a href="">is Internet service provided partly or entirely by local governments</a>. Usually, the networks providing service, as well as their backing companies, exist at a local level rather than a national one.
Being run with involvement from the local government means such services can provide access to the web at a much lower cost, or in some cases even for free, while being better tailored to the local community’s needs. Areas that have built successful municipal networks often are able to provide more equal access to the Internet and better connectivity, which can be a driving force in attracting high tech companies, economic development, and job growth.</p> <p>In general, municipal Internet services are less expensive and much faster than their telecom alternatives, which is the reason they can have dramatic impacts on their communities. An oft-cited example of a successful municipal network is in Chattanooga, Tennessee. There, one Internet provider option is EPB, the local electric company. EPB offers a 100Mbps base Internet package <a href="">which is ten times faster (or more) than the majority of the Internet access in the United States as of 2015</a>. The price was half of what one other big-name ISP charged for similar speeds in certain areas, but without data caps, which raise the cost-per-gig of service substantially.</p> <p>Less expensive or not, the idea of paying for a network you might not use through additional taxes is a frequent complaint about municipal networks. This is a fair concern because the price tag of a successful network, such as Chattanooga’s, isn’t tiny — <a href="">Chattanooga’s network came with a $330 million price tag</a>. Failed networks, such as the <a href="">UTOPIA project in Utah</a> and <a href="">another in Philadelphia</a>, have come with large taxpayer expenses as well. However, not all municipal Internet projects are taxpayer funded — take Sandy, Oregon’s, which offers service similar to Chattanooga’s but was funded by revenue bonds.
Others, like the fiber network in Monroe County, New York (which currently is limited to government use), were built as tax-efficiently as possible, <a href="">piggybacking on existing construction projects</a>. Successful networks provide a good return on the investment no matter how they’re funded; Chattanooga again provides an example.</p> <p>Chattanooga has experienced what has been referred to as an “economic rebirth,” transforming it from a fairly no-name city into a technology hub. A boom in business has brought the city’s unemployment rate down from 7.8% to 4.1%, alongside an increase in average wages. A local tech incubator formed with the promise of fast, inexpensive, and reliable Internet. And the city’s downtown population doubled, with housing that in some cases offers free access to the Chattanooga network as an amenity. Of course, more equal access allows for people to be more informed and better educated, with access to news, online courses, and everything else the web has to offer.</p> <p><a href="">At least 500 communities in the U.S. provide some form of municipal network</a>, including Chattanooga, Tennessee and Sandy, Oregon. Not every local government is equipped or willing to build a municipal network, but volunteers can sidestep them to build their own.</p> <p>Taxpayer funded or otherwise, there are clear benefits to public broadband, and there are successful examples of how to implement such a network. There are ways to cut the costs of building a reliable network by using wireless technologies or mesh networks. Regardless, better access provides better opportunities and better equality. If nothing else, a municipal network adds competition for larger ISPs, which can help drive down pricing and improve service even for those not using the municipal option.</p> Thu, 23 Feb 2017 00:00:00 -0500 Innovation and a Neutral Internet <p>The debate on the role of net neutrality when it comes to innovation is subtle.
Both pro and anti net-neutrality arguments <a href="">tend to involve innovation, but from different angles</a>. Internet providers are usually anti-neutrality, with the claim that a neutral Internet stops them from charging more to move data, which would (in theory) prevent them from investing more into their networks. Large companies such as Google tend to be for net neutrality, because a non-neutral Internet would likely cost them more. The likes of Google, Facebook, Netflix and other huge online services grew, and for now continue to grow, in a neutral Internet. However, so have Internet providers, which <a href="">make far above what’s considered a “good” profit margin for a business</a>.</p> <p>Internet providers have already <a href="">admitted that limits such as data caps have nothing to do with their network capacity</a>. It is true that as high-bandwidth services like Netflix, Skype, and other streaming services are used by more people, more network capacity is required. However, the amount of data that is zero-rated - that is, data that can be streamed without using up a data cap - makes it clear that capacity is not a huge concern. Companies like AT&amp;T have rolled out streaming services for live TV quickly and without capacity problems (though not without other problems) on more than one occasion. Further, companies like Verizon are <a href="">pushing landline customers to wireless rather than repairing copper cables</a>, which speaks to how much spare capacity they seem to have available (<a href="">certain wireless calls are routed over the Internet on some networks, including Verizon and AT&amp;T</a>).</p> <p>Looking at where money given to Internet providers has gone in the past makes the idea that ISPs would reinvest larger profits back into their services questionable. Most ISPs are public companies that answer to their shareholders, who have financial stakes in their success.
Most likely, at least a large part of an influx of funding would be distributed to shareholders. An influx of cash to Internet Service Providers has happened before: <a href="">the government gave grants for network upgrades, and little to none of the funding actually went towards network improvements</a> - though company executives had <a href="">larger than usual bonuses for the years following</a>. The idea that ISPs are in any sort of need of extra funding is questionable in and of itself due to the profit margins we’ve seen throughout their history. <a href="">Most of the cost of running a network is in the geographic installation, rather than the day-to-day maintenance</a>. Not to mention, <a href="">ISP spending as a percentage of revenue actually fell from 2007 to 2014</a>.</p> <p>Money that online services need to pay to ISPs for delivering their content is money that online services are unable to invest back into themselves. What this means for consumers is either higher costs for the same service, if online services choose to pass those costs on to their customers, or fewer improvements and fewer new features in the services they use. In a world where Internet service providers charged online services for bandwidth - which, in reality, online services are already paying for, just not on the consumer ISP end - it would be far more expensive to build an online service. In the end, this is bad for consumers because it limits the competition that can develop when it comes to high-bandwidth services. Internet providers have used the argument that by charging high-bandwidth online services more for transporting their data, they could charge their customers less. In reality, customers are more likely to end up paying the same, if not more, for their Internet when factoring in the cost of online services, even though their Internet connection could cost less.</p> <p>The fact is, the vast majority of companies are not Internet service providers.
While Internet service providers need to continue to invest in their networks in order to deliver content to their customers, there’s too great a risk in allowing them to become the gatekeepers to the Internet. The shipping company analogy - where service providers should be able to charge companies more for moving their data, and companies will pay for it if they need it - breaks down here. Access to goods and services is important, but waiting an extra week for an online purchase is nothing more than an annoyance (and where it is more than that, there is usually a local option). Access to information needs to be equal.</p> Wed, 08 Feb 2017 00:00:00 -0500 Net Neutrality and Your Voice <p>Net neutrality aims to protect the many voices that compose the Internet from censorship. The web has made it more possible than ever before to be informed and to be heard. More than half of Americans use the Internet as their primary source of information. <a href="">An average of over 1.2 billion people per day shared their voices on Facebook</a> in March 2017. Unfortunately, access to the Internet is provided by an industry with a near monopoly on providing service <a href="">that claims its own rights to free speech</a> when it comes to deciding what its customers have access to. If the ISP industry has its way, the world of free speech that exists online could be divided and censored based on what the industry determines people are willing to pay for.</p> <p>Service providers have in the past claimed—and some still claim—that <a href="">control over what their customers see is their First Amendment right</a>. Specifically, in a brief filed with the U.S. Court of Appeals, one of the Internet providers that still makes this claim says that net neutrality prevents it from favoring its own websites over others to send its own message. While a data provider claiming that it should have such control over what people can see is worrisome, the statement is true.
Net neutrality requires Internet providers to deliver all websites at the same speed, the same way. The importance of that is underscored by the fact that large ISPs now often own their own media outlets. By throttling competing sites, encouraging their own services over others with zero-rating, or by outright blocking sites, ISPs control who can be heard and what can be known. As more communication moves to the digital world, ensuring that the same ideas that can be expressed in the physical world can be expressed without suppression on the Internet is extremely important.</p> <p>If a service lacks the funding to pay to be part of the open Internet or an Internet fast lane, it may be doomed to fail even if it isn’t outright blocked. Zero-rating can increase the use of the data-exempt service substantially. Research from Microsoft <a href="">suggests that slowing down a website by 250ms (the blink of an eye) makes users more likely to use a competitor</a>. What that means for the owner of a small business or a personal website who can’t pay to be in one of the so-called “fast lanes” or sponsored data plans is much more difficulty in keeping visitors. Large companies could afford to pay the fees for carriage in those fast lanes, which right away makes their voices stronger than those not in a fast lane. Fast lanes of a form already exist. Companies such as Netflix <a href="">pay for access to Internet fast lanes</a> by putting their servers directly on Internet providers’ networks.</p> <p>On a more individual level, <a href="">a non-neutral Internet takes away your right to choose what voices you listen to</a>. Internet service providers are for-profit companies whose goal is to maximize their already astronomical profits. By discouraging people from accessing sites—through the use of data caps, throttling, or zero-rating—service providers take control of what people are able to see. The vast majority of Internet subscribers in the U.S.
have access to only one or two Internet providers. With so few options, there’s no easy way to vote with your wallet or choose a different ISP that offers you more access. Without competition, there’s little to stop ISPs from offering limited service at high prices.</p> <p>Through net neutrality rules and regulation, we can keep the Internet an even playing field where <a href="">all data and views are equally accessible</a>. Internet service providers should not be gatekeepers to being heard online. The fact that an information provider would argue that it should be able to provide an editorialized version of the Internet is concerning, particularly when many of them own media outlets. <a href="">We have had this discussion before with cable TV, which due to legislation is not neutral</a>. That means television carriers (cable providers) can drop networks that aren’t willing to pay their fees—<a href="">as recently happened with a state news network in Connecticut</a>—and that independent producers are almost entirely blocked from being shown. Without protections for net neutrality, the same could happen online. If we don’t fight for net neutrality now, we may find ourselves with few choices, limited information, and less ability to share our views. Large companies would rule, and startups, individuals, and independent creators would be shut out.</p> Thu, 02 Feb 2017 00:00:00 -0500 The Problems with Tiered Internet <p>We’re generally familiar with how buying cable works. Usually, there’s a collection of packages including an increasing number of channels at increasing cost. Adding “extras” like HBO costs extra. The Internet does not work the same way. Rather than paying for access to collections of websites, we pay for different speeds, and the entirety of our Internet is delivered at that speed. The Internet operates very differently from cable TV, so this sort of access scheme makes sense.
A tiered Internet would operate more like cable; you might pay for faster access to a collection of websites, and your cable company might charge companies to allow their services to be available to you at better speeds.</p> <p>In a tiered Internet, you pay twice for the services you use. In a world of Internet fast lanes, some companies could pay Internet providers for their service to be faster than others. In turn, they would pass the costs of doing so on to their subscribers. This means that consumers end up paying their service providers as well as paying more for the services they use. Service providers such as AT&amp;T <a href="">have already complained that online services don’t pay them</a>. The problem with this argument is that the end user buying Internet service is paying for that data already. Internet providers have already explained that data caps are not related to congestion or network limitations, so the idea that an online service would need to pay an Internet provider to provide access to its service is entirely profit-driven. Further, <a href="">the cost for moving data across the Internet is very low - less than a penny per gigabyte</a>. Online services already pay for bandwidth anyway - just not to a consumer service provider.</p> <p>Tiered Internet plans divide the Internet into two Internets: an elite, unlimited, and privileged Internet, and a slower, limited Internet. <a href="">With a majority of people using the Internet</a>, this creates a split in how well people are able to be informed about the world. Already (as of 2013), <a href="">26 million Americans are not able to afford access to the Internet</a>, let alone access to a top-tier open Internet. It’s unlikely that a service provider would outright block access to websites. However, by slowing them down or introducing caps on data outside an Internet package, service providers can effectively discourage people from visiting them.
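To put that less-than-a-penny-per-gigabyte figure in perspective, here is a quick back-of-the-envelope comparison against a consumer overage fee. The overage pricing below ($10 per 50 GB block) is a hypothetical example for illustration, not any real carrier’s plan:

```python
# Compare the wholesale cost of moving data (the "less than a penny per
# gigabyte" figure above) with a hypothetical consumer overage fee.
TRANSIT_COST_PER_GB = 0.01   # wholesale figure: under a penny per GB
OVERAGE_FEE = 10.00          # hypothetical: $10 charged per overage block
OVERAGE_BLOCK_GB = 50        # hypothetical: each block adds 50 GB

overage_per_gb = OVERAGE_FEE / OVERAGE_BLOCK_GB
markup = overage_per_gb / TRANSIT_COST_PER_GB

print(f"Consumer overage price per GB: ${overage_per_gb:.2f}")
print(f"Markup over wholesale transit cost: {markup:.0f}x")
```

Even with generous assumptions about the wholesale figure, the gap between what data costs to move and what overages charge for it illustrates why these fees look profit-driven rather than capacity-driven.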
There’s a lot more on the Internet than many people realize - everything from news, to source code, to online courses - parts of which many people may not use. The problem is, by discouraging use of those less common places, it becomes harder to get a view of what’s going on at a wider scale.</p> <p><a href="">The technology exists</a> (and is widely available - even your home router likely supports it) to allow certain types of data to move faster than others. Typically, this would be used to allow VoIP (Internet telephone, basically) to take priority over, say, loading Facebook, so that heavy web traffic doesn’t cause call drops. Without net neutrality, <a href="">service providers can artificially limit speeds based on the Internet packages their customers subscribe to</a>. This could mean that someone would need to pay extra for their gaming to be faster, or for the ability to work from home, or even just to have Skype work reasonably well. While service providers likely wouldn’t block services altogether, a tiered Internet could be built around that sort of traffic shaping. There isn’t a technological reason for Internet providers to do this, <a href="">given their own admissions that network capacity is not a problem</a>. Unfortunately, there’s also no way to escape it as a customer because there are so few Internet options available. <a href="">Net neutrality rules prevent this from happening</a>.</p> <p>What makes the Internet so powerful and so useful is that once you pay for access, you have access to everything, and at the same speeds. No matter what, it’s possible to access any information at any time. On a tiered Internet, that isn’t so. Tiered Internet packages create an elite Internet of people who can afford to access the whole Internet at the same speed, and who therefore have more reliable access to information. It also puts Internet providers in control of what users of the lower-tier, less expensive Internet can see.
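The traffic prioritization (QoS) described above boils down to a router draining its queue highest-priority-first, so latency-sensitive VoIP packets go out before bulk transfers. Here is a minimal sketch of that behavior; the traffic classes and priority numbers are illustrative, not real DSCP values:

```python
import heapq

# Toy model of priority queueing: lower priority number = sent first.
# These classes and values are illustrative, not a real QoS configuration.
PRIORITY = {"voip": 0, "web": 1, "bulk": 2}

def transmit_order(packets):
    """Return packets in the order a priority queue would send them."""
    queue = []
    for seq, (kind, payload) in enumerate(packets):
        # seq breaks ties so same-priority packets keep arrival order
        heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))
    return [(kind, payload) for _, _, kind, payload in
            (heapq.heappop(queue) for _ in range(len(queue)))]

arrivals = [("web", "page.html"), ("voip", "frame-1"),
            ("bulk", "backup.tar"), ("voip", "frame-2")]
print(transmit_order(arrivals))
# VoIP frames jump the queue; web and bulk traffic follow in priority order
```

The same mechanism that keeps calls from dropping is what would let a provider slow traffic by package tier: nothing in the scheduler cares whether the priority reflects technical need or what a customer paid.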
That becomes a huge problem for living in an informed democracy.</p> Wed, 25 Jan 2017 00:00:00 -0500 Why Zero Rating Actually Sucks <p>Everyone loves zero-rating, <a href="">which is when certain services don’t count against your data limit</a>. Depending on your provider, anything from NFL games to Pokémon Go might be free to use without using up your data. There are two ways carriers do this: by having the company behind the zero-rated service pay for the data use, or by simply not counting it for promotional reasons. For us as users, it seems pretty good. Unfortunately, zero-rating brings a number of serious problems that hurt users in the longer term. It hurts competition between online services, discourages users from freely accessing the Internet, and ultimately costs more. It turns out that, as great as it seems, zero-rating’s problems are big.</p> <p>Zero-rating can be anti-competitive because it gives large advantages to the services that are zero-rated. This can be particularly problematic when service providers zero-rate their in-house services, especially where there are few choices. <a href="">Zero-rating a service, even for a short time, has been shown to cause huge spikes in the usage of that service</a>. That means that where services are zero-rated, people are more likely to use the zero-rated services over other alternatives. It’s understandable, because data plans tend to be expensive. Unfortunately, that makes it much more difficult for competition to grow, so there’s little reason for the zero-rated services to offer better prices (or service). A new competitor would need to make the case for why someone should use their limited data on it, versus using the other service that doesn’t count towards their data.
In the end, the user loses out, with potentially worse service and higher prices.</p> <p>Services that are zero-rated because of agreements with a carrier usually pay for data used to access their service on behalf of their users. This means that if you have a data plan that allows you to stream sports for free and you stream NFL games, for example, the NFL may need to pay for the data you’re using. Carriers and service providers still want the data paid for, so if you’re not paying for it, <a href="">then the service you’re using data-free is</a>. Those services still need to make their own money, which could mean raising their prices in response. This means that users can end up paying more - first for the data subscription package with zero-rating, and then again for the data they’re (invisibly) paying for via the subscription fees for the zero-rated service. Service providers <a href="">explain that they don’t “double-dip”</a>, which is likely true, but <a href="">that doesn’t mean it isn’t a good deal for them</a>.</p> <p>Due to zero-rating, users can be artificially limited to a small portion of the Internet, and are discouraged from using anything outside of it because it would cost them data. This has been used as one of the biggest arguments against Facebook Free Basics, <a href="">which provides free Internet access (limited to Facebook and some other sites Facebook chooses) to people who couldn’t otherwise afford Internet access</a>. This effectively makes Facebook the gatekeeper to the Internet - only allowing Free Basics users to see what Facebook wants them to be able to see. On a larger scale - not just with Free Basics but with all Internet providers - this means not everyone can hear everyone, or make themselves heard. An open, neutral, and non-zero-rated Internet makes it possible for anyone to publish a website or an app.
With zero-rating on a large scale, that is no longer the case, because users are less likely to visit new places that will count against their data. Service providers already zero-rate their own services, which can include news networks - meaning users can be less likely to seek out a wider world view.</p> <p>What’s nefarious about zero-rating is how hard it is to get past the “free stuff” part of it. Encouraging people to use more of (or only) certain services by zero-rating them is bad for consumers and, in the end, even costs them more. Some countries <a href="">have even banned services like Facebook Free Basics</a> because of their zero-rating aspects. Even in the U.S., the FCC <a href="">has taken notice of zero-rating</a> practices. Without rules protecting a non-zero-rated Internet, the future of the Internet (and our ability to be informed) might be sold to us as a collection of services that don’t use up our data.</p> Mon, 16 Jan 2017 00:00:00 -0500 How Data Caps Hurt the Internet <p>Data Caps (also called Bandwidth Caps) <a href="">are a limit on how much you can upload and download through your Internet provider</a>, often on a monthly basis. Most of us are familiar with data caps from cell phone data plans, although data caps on home Internet also exist. Usually, they’re sold as a quality and fairness measure, so that no one person can hog the service provider’s network. Whether that’s widely believed is questionable, since customer protest usually causes ISPs to roll out data caps as a voluntary way to reduce bills or as “tests” in a certain area. In modern networks, data caps are not necessary to ensure quality of service for everyone on the network and <a href="">actually have little to nothing to do with actual usage</a>.
It turns out that data caps are more profit-driven, and actually influence consumers to spend more for their Internet than they need to.</p> <p><a href="">One thing that data caps are effective at is lowering data usage</a>. Generally, people are good at keeping track of how much data they use and managing it over the course of a month, using less towards the beginning of the month. Towards the end of the month, they’ll use more in order to get all the data they feel they’re paying for. This isn’t terribly surprising, since intuitively most people want to make sure they’re not spending more data than they have, but then don’t want to “lose” the rest of it. Data caps do sometimes come with higher speeds, which is a great way to sell them since they are otherwise extremely unpopular. At a glance, this can sound like a good thing, but cable companies have <a href="">admitted that caps have nothing to do with network congestion</a>, which would be the suspected justification for them.</p> <p>More recently, as streaming services such as Netflix and online gaming have become more popular, data caps tend to limit choices. Most cable companies provide their own on-demand services for access to movies and TV shows. In some cases, they even offer their own online gaming platforms. With both, usage usually does not count against data caps. This means that users who require data for different things - working from home, for example - are less able to make use of other services. Caps tie users to their provider’s services more - which can become a problem, since many providers also own their own news outlets, which they can easily decide not to count against your data. With bias in news a real issue and data caps that limit the amount of other information users can access, that spells bad news for staying informed.
If nothing else, limiting choices can also limit competition, as it’s difficult to start another streaming service or news network when accessing it is too expensive for people with limited data.</p> <p>It turns out that data caps actually make Internet access more expensive. People are so afraid of going over their caps and getting charged overage fees (or having their access cut off) that <a href="">they often buy more expensive Internet service than they need for higher caps</a>. Few people are fully aware of how much data they use over the course of a month for their home Internet, so when data caps are introduced, they pay more for data they don’t need. What’s more, people on plans that did not limit data <a href="">paid almost 80% less per gigabyte of data</a>, which is a huge difference. That statistic alone shows that claims that metered (limited) plans are less expensive are not true.</p> <p>Data caps are not about fairness or improving network technologies, as has been claimed in the past. Not only have service providers <a href="">admitted that caps have nothing to do with congestion</a>, they’ve even <a href="">told their call centers to stop claiming that they do</a>. If data caps were about fair pricing and paying only for what you use, then there would be lower-level packages - there isn’t anything fair about someone who only needs Internet to check their email once a day paying $50/month for access to do so.</p> Tue, 10 Jan 2017 00:00:00 -0500 Net Neutrality and Access to Information <p>The Internet is a major source of information, providing digital outlets for nearly any subject. Almost 40% of Americans get their news online, according to <a href="">a Pew Research study in 2016</a>, making the Internet a news source second only to television. In addition, a majority of adults <a href="">get news from social media</a> - which is a problem because social media provides only a curated view of the news.
These statistics underscore the need for open access to news and information online. Limiting access to information - fake news is a different issue - makes it much more difficult to stay informed. Easy access to information is empowering.</p> <p>Many news sources tend to have a political bias, to the point where <a href="">studies have connected the ideological views of people to their preferred news network</a>. Though people often pay more attention to news they agree with, having all sides of an issue available is very important. Without exposure to other views, it’s much more difficult to find the truth in an issue. As the Internet has taken hold, we’ve enjoyed having easy access to those differing views, which helps provide more accurate insight. <a href="">Having an informed populace is extremely important to our government functioning well</a>, and the Internet has the hope of making it easier to be informed than it ever was before.</p> <p>One of the things that net neutrality brings is equal access to every viewpoint. We know that there are biased media outlets, and we also know that people watch them. This isn’t necessarily a problem because, no matter what, it is possible to fact-check and compare views. However, <a href="">many Internet service providers own or are owned by those same media companies</a>. Comcast, for example, owns NBC and MSNBC, which in the past have been accused of leaning left. ISPs get advertising revenue from the media outlets they own, so they are incentivized to promote those outlets over others.</p> <p>Service providers already prioritize certain services over others through <a href="">Zero Rating</a>. Zero Rating is when certain services don’t count towards data caps - such as T-Mobile allowing services such as Spotify to be used without using your data, or <a href="">Comcast allowing you to stream Comcast’s services without hitting your data cap</a>.
Zero Rating is technically not allowed by net neutrality rules because it could be applied to anything, including news outlets. What this means is your Internet provider can encourage you to use its own media outlets, or ones it agrees with. In a world of zero rating, it’s hard to argue that if you pay for access to the Internet through a company, you shouldn’t also have access to that company’s own services - but many service providers also have their own news networks.</p> <p>Even if service providers don’t go so far as to outright block things, they are able to slow them down. It turns out that it doesn’t take much to deter people from visiting a website. <a href="">People are unwilling to wait for a page to load</a>; a quarter of people visiting a page that takes 4 seconds to load will leave instead of waiting. Worse, there have already been issues with service providers slowing down certain services. In 2014, <a href="">Netflix found that Comcast was slowing down their streaming</a>. Comcast claimed that streaming Netflix videos took too much bandwidth; however, Netflix is far from the only service that streams video.</p> <p>Research has shown that adults who have reliable access to the Internet <a href="">are more likely to try to learn more about their world</a>. Access to a wide variety of viewpoints is important for people to make informed and empowered choices. With the Internet becoming a standard tool for accessing those viewpoints, it’s vital to make sure that Internet providers don’t become gatekeepers for information.</p> Wed, 04 Jan 2017 00:00:00 -0500 Why Internet Providers Don't Compete <p>In the United States, <a href="">most people have access to only one or two</a> Internet Service Providers. <a href="">Only 28% had access to three or more for speeds one might consider tolerable, and 9% for speeds one might consider “fast”, as of 2014</a>.
Since that data was collected, some providers have merged with others, so there are even fewer options available. Mobile Internet is better, as most people have access to more than three providers for standard speeds.</p> <p>The lack of options isn’t surprising. Unlike most other industries, <a href="">building an Internet service provider (ISP) is prohibitively difficult</a>. It requires large, expensive installations of equipment, and requires buying Internet service (essentially to re-sell) from an existing service provider, which could be a competitor. In addition, the initial costs are so high that a new service provider is unlikely to make money for several years after construction. <a href="">New neighborhoods are generally built with only one cable provider in mind</a> as well, which removes competition from the start. It isn’t impossible to create a new ISP, but it is far too difficult and expensive for most. Even Google is getting out of the fiber business after entering a few cities. Consider how difficult it would be to start a new electric company; starting a new ISP is similar.</p> <p>In addition to the technical barriers to entering the ISP market, larger ISPs actively avoid competing. In 2016, Charter agreed to FCC rules intended to increase competition in order to buy Time Warner Cable and Brighthouse, <a href="">then sued the FCC to overturn those rules</a>. Also in 2016, Charter explained that they don’t compete with other cable companies <a href="">because it would make it impossible to buy them</a>. In 2008, <a href="">Comcast even sued a city to block it from building its own local ISP</a>. In an industry that is already extremely difficult to enter, these practices make it almost impossible for a new option to emerge.</p> <p>When service providers are forced to compete, their prices often drop substantially. 
When Google Fiber announced they would offer Internet service in Tennessee, for example, Comcast and AT&amp;T suddenly <a href="">began offering substantially lower prices and more products</a>. AT&amp;T cut prices for some of their products by as much as 40%. In Charlotte, Time Warner Cable <a href="">made their products 6x faster</a> when Google Fiber was expected to become available. Until there is competition, providers can charge whatever they want because there is no cheaper option. Given how quickly ISPs manage to offer lower prices and better service when a competitor arrives, it’s clear they could be doing better already. Unfortunately, there’s no competitive push for improvements because competition is so rare.</p> Fri, 23 Dec 2016 00:00:00 -0500 Explaining Net Neutrality <p>Net Neutrality is the idea that all data on the Internet, regardless of who sends it, who consumes it, or which company or political viewpoint it comes from, should be treated the same way and move at the same speed. We’ve enjoyed a mostly-neutral Internet for some time now, which is what has allowed the Internet to become an important means for moving information around the world. A neutral Internet is also the reason it’s easy for new companies to get up and running online and find their customers. It’s difficult to explain the importance of a neutral Internet well in a short blog post, so be sure to take a look at the links throughout for more information.</p> <p>At a glance, it can be difficult to understand why Net Neutrality matters, in particular because we’ve never seen a non-neutral Internet. A non-neutral Internet can even sound appealing the way service providers market it: you pay only for what you need. Unfortunately, service providers themselves are far from neutral. 
If you like, or take issue with, channels such as NBC, CNBC, MSNBC, FOX, or other major media outlets, you should be aware that they either own, or are owned by, service providers. Let’s take a look at a few commonly known ones:</p> <ul> <li>NBC, CNBC, MSNBC - Owned by Comcast <a href="">(From Freepress)</a></li> <li>CNN - Owned by Time Warner, which was recently bought by AT&amp;T <a href="">(From NPR)</a></li> <li>Newsday, AMC - Owned by Altice (formerly Cablevision) <a href="">(From Wikipedia)</a></li> <li>FOX, Dow Jones - Owned by NewsCorp, which has large stakes in several Internet service providers <a href="">(From Wikipedia)</a></li> </ul> <p>This is far from an exhaustive list; the intent is to highlight the fact that major media companies are not neutral. Considering the (generally) limited number of cable and Internet service providers in most areas, it’s not terribly unlikely that your Internet and cable are provided by a company with opinions that differ from yours. Further, a number of online and even print publications are owned by the same companies. If your cable or Internet service provider isn’t in that list, there’s a fair chance that it’s still owned by one of those companies. Still other companies are able to exert control over service providers due to their sheer size; <a href="">companies such as Disney, Viacom, and others you might not even consider, such as GE</a>.</p> <p>Net Neutrality protects consumers from the positions that these media companies may hold by requiring them not only to allow you to see viewpoints other than theirs, but to allow you to access them just as easily. It would be reasonable to assume that the United States protects this by law, but it doesn’t. In fact, in 2014, <a href="">the government agency that normally enforces that was stripped of its power to enforce it through their normal means</a>. 
Further, the sheer size of these companies, and the fact that they often have regional monopolies on the services they provide, means there is limited competition to fight the problem from a business standpoint.</p> <p>If Net Neutrality sees its end, those companies will have the ability to control what you see. Most of us have seen channels (usually temporarily) disappear from our cable lineup because of disputes with companies over money and contracts. We would see more of that, and we would see it carry over to online content. In the 2016 election we saw huge amounts of fake news from both sides, but we had the ability to fact check it. That isn’t the case when the company providing your access limits what you’re able to see.</p> <p>When corporations have the ability to control the information that areas of the country have access to, it breaks democracy. Democracy only works with an informed populace; a populace that can be divided and whose opinions can be manipulated is far easier for an authoritarian government to control. Alternatively, it also becomes possible for a very small elite to control elections by manipulating what people see. Many of us use the Internet to stay informed about what’s happening in the world around us, and we need to make sure the Internet remains a neutral place to do that. And, if nothing else, a non-neutral Internet allows service providers to charge far more for delivering Internet service, something which has <a href="">a 97% profit margin already</a>.</p> <p><em>For more resources on Net Neutrality, see also:</em></p> <ul> <li><em><a href="">What is Net Neutrality - American Civil Liberties Union</a></em></li> <li><em><a href="">Net Neutrality: What You Need to Know - Freepress</a></em></li> <li><em><a href="">Net Neutrality - Wikipedia</a></em></li> <li><em><a href="">What Will a Non-Neutral Internet Really Be Like? 
- CBS MoneyWatch</a></em></li> <li><em><a href="">What is net neutrality and what does it mean for me? - USA Today</a></em></li> </ul> Mon, 05 Dec 2016 00:00:00 -0500 Encryption is not the enemy <p>Encryption is a well-understood and well-known technology in the world of computing. Though the media would have us believe otherwise, <a href="">encryption is not much more than fairly basic math</a> involving some large, random numbers. There’s a little more to it than that, but it’s based around the fact that modern computers take a really long time to do certain things. That’s not because it’s complicated; it’s just something computers happen to be fairly bad at. Most things that use encryption rely on methods that are widely published and well known; it’s the keys (or passwords) that aren’t. No dark magic, and no weird science, just a little math and some keys. If you felt inclined, you could do the tedious job of encrypting something without a computer, as long as you had some notes on the math and a calculator.</p> <p>While it isn’t always made clear, encryption is imperative when doing almost anything in the modern world. The <a href="">green lock icon in your address bar means you’re using a website over an encrypted connection</a>. If you have medical records, do banking, or use a credit card, encryption is involved in keeping you safe. All of us are directly or indirectly using some form of encryption in our daily lives, often without noticing. Without this encryption, our data would be open for everyone to see and access. Stealing unencrypted information, especially while it’s moving over a network, is incredibly easy to do and requires nearly no “hacking” skills. 
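To make the “just a little math” point concrete, here’s a toy version of the Diffie-Hellman key exchange, a widely used piece of encryption math, runnable in any bash shell. The numbers below are tiny and purely illustrative; real keys use numbers hundreds of digits long, which is exactly what makes them impractical to crack:

```shell
# Toy Diffie-Hellman key exchange. The shared prime p and base g are
# public; only the two secret numbers (a and b) are ever kept private.
p=23   # public prime (real systems use primes hundreds of digits long)
g=5    # public base
a=6    # Alice's secret number
b=15   # Bob's secret number

A=$(( (g ** a) % p ))   # Alice computes and publishes A
B=$(( (g ** b) % p ))   # Bob computes and publishes B

alice_key=$(( (B ** a) % p ))  # Alice combines her secret with Bob's public value
bob_key=$(( (A ** b) % p ))    # Bob combines his secret with Alice's public value

# Both arrive at the same shared secret without ever transmitting it:
echo "Alice: $alice_key  Bob: $bob_key"   # prints "Alice: 2  Bob: 2"
```

An eavesdropper sees p, g, A, and B, but working backwards to recover a or b (the discrete logarithm problem) is exactly the kind of math computers are “fairly bad at” once the numbers get large.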
In fact, there are apps available for “<a href="">network sniffing</a>”, as it’s called, <a href="">for your phone</a> and even <a href="">for your web browser</a>, because it’s a useful tool even in applications that don’t involve stealing data.</p> <p>Unfortunately, as we hear in the news in cases such as the San Bernardino iPhone case, encryption can keep valuable information out of the hands of governments involved in investigations. These cases get used to demonize encryption technologies and to push for everything from backdoors (alternate ways of getting access to the information) to outright bans on encryption. These suggestions raise a lot of problems.</p> <p>Backdooring encryption is often proposed as a reasonable “compromise” approach, but it doesn’t work. On the darker side of society, knowledge of how to break into a system and steal data via a backdoor can sell for huge amounts of money. It’s so lucrative that people, companies, and governments all over the world find and sell backdoors, some of them for a living. No backdoor is perfect; for any of them, it’s only a matter of time before it gets discovered, leaked, or sold by an unhappy employee to the highest bidder. In just the past month this has happened twice, resulting in the leaks of <a href="">NSA hacking tools</a> and <a href="">Microsoft’s Secure Boot master key</a>. Earlier this year, Juniper Networks announced they had <a href="">found and released a patch for a backdoor</a> possibly placed by the NSA in their systems - which are used by NATO and the U.S. government, among others. There is no such thing as a secure, invisible backdoor, and the U.S. is far from the only entity trying to find and exploit backdoors.</p> <p>Like all technologies, encryption gets used from time to time by unsavory people. However, that does not give us a reason to compromise all of our safety in order to see what they’re hiding. 
The amount of fraud, data theft, and hacking that would result from the loss of secure encryption is far more dangerous. As far back as 2001, <a href="">NIST estimated that the net value of encryption ranged from $345 billion to $1.2 trillion</a>, which puts a number to the major implications of breaking encryption (via a ban or backdoor). Encryption is a huge and important part of the modern world, and the calls we hear from officials to require backdoors, ban encryption, or start “Manhattan Project[s] for Encryption” are misguided.</p> <p><em>Endnote: I highly recommend reading the resources linked throughout if you’re interested in learning more. They’re highly informative and not overly technical in most cases, and provide a good overview of real-world events and research.</em></p> Sun, 28 Aug 2016 00:00:00 -0400 On Orlando <p>When I woke up yesterday morning, I saw that there had been a mass shooting in Orlando. It’s really telling that when I saw the headline, my reaction was “ugh, again?” and I kept scrolling. We’ve reached the point as a country where my generation considers mass shootings to be so common that we hardly react to the headlines and we forget about them quickly. Earlier this month, there was a murder-suicide at UCLA that we’ve already stopped talking about, and there are others we haven’t even heard about. It’s clear that we need to do more, both legislatively and culturally, to work towards a resolution of these issues. When we politicize human rights and safety, this is the result and will continue to be the result.</p> <p>The attack on Pulse was a direct attack against LGBTQ+ individuals as well as U.S. citizens, Latinos, and Muslims. At least forty-nine innocent people died in the attack and fifty-three were injured, some in critical condition as of last night. Over 100 circles of families and friends are grieving, praying, and worrying; their lives will never be the same. 
It is nothing short of devastating.</p> <p>Those of us who are members or allies of the LGBTQ+ community feel as though we’ve lost family and friends, even though most of us likely didn’t know any of the victims personally. Every one of us understands how it feels to come out, to have our first real kiss, and to have the strength to press on through the storms of gay slurs and anti-gay legislation. We’ve struggled, and we continue to struggle, for our love and our true genders to be recognized and accepted. We fight feelings of humiliation, of not being “man” or “woman” enough, nervousness about being who we are, and uneasiness about expressing our love where others can see it. For many attendees of Pulse, it was likely one of the few places where they could be themselves without fear. The victims are our people, our friends, and our family. All of us are grieving.</p> <p>It’s easy to assign blame somewhere else. And indeed, Daesh has claimed responsibility. But, as Orlando slips below the fold on Facebook’s trending topics, we’ll do nothing but blame, and nothing to prevent the next shooting. We need to counter the culture of violence, racism, xenophobia, homophobia, and sexism that has come to light in this round of presidential primaries. Until we do, it’s all of our fault and will happen again, and again, and again in everyone’s communities, and no amount of surveillance will prevent it.</p> Mon, 13 Jun 2016 00:00:00 -0400 Explaining Software to Non-Engineers <p>If certain stock images are to be believed, software engineering is equivalent to reading the source code of the Matrix. It’s not tremendously surprising to see it depicted that way. Software is a very abstract concept that is frequently communicated very poorly by the people who build it.</p> <p>As we learn about the world, we develop what are called mental models, <a href="">which are thought processes surrounding how something works</a>. 
When we learn about something new, we look for ways to apply our existing mental models to it, sometimes incorrectly. A great example of a mis-applied mental model is someone experienced with film cameras equating a digital camera’s memory card to film and replacing it when full. Mental models are usually grounded in physical, functional things, so they don’t work as well when applied to something abstract such as software construction. Most people also don’t have a mental model that can be applied to software, so they’re more inclined to use something they know, such as the Matrix; hence the amusing stock images.</p> <p>Not only is software an abstract concept, it’s surrounded by a lot of jargon. To a fellow software engineer, a statement such as “The API call to the MySQL database throws a null pointer exception” is perfectly understandable. To a non-engineer, that sentence likely has no meaning at best, and at worst makes technology seem far more intimidating than it actually is. It’s okay to just say that something broke. If more information than that is needed, the person is probably at least slightly technical, but even saying that “when the program tries to get data, it doesn’t get anything and crashes” conveys information without saying something that <a href="">sounds like television static</a>. Real-world analogies can also be helpful, since it’s <a href="">easier to fit a mental model to things that can be pictured</a>.</p> <p>I’ve seen a lot of engineers take a “tell me more” or a “how does that work” as a cue to give tons of details, and watched the eyes of the person they’re talking to glaze over. A lot of people will smile and nod no matter what is said as soon as the technical details become completely incomprehensible to them, in hopes of seeming like they understand. It can also start an interesting game of Telephone if they try to explain it to someone later. 
Eventually, we end up with magic and aliens.</p> <p>Most of the time, we don’t need to explain the low-level details of how an application works to someone who asked about it, but it can be hard to gauge how technical to go. Unless someone asks, it’s better to keep it simple and explain the idea behind it like you’re trying to sell it as something cool. Maybe the messaging framework your microservices use to talk to each other is super cool, but your grandma probably doesn’t need to know about it. In fact, she probably doesn’t even need to know about the microservices, just the end goal that your application fulfills.</p> Wed, 16 Mar 2016 00:00:00 -0400 Blogging with Git and Jekyll <p>With most blog platforms, the process of creating a blog post is pretty similar. It usually looks something like: log into your blog, click the “create post” or equivalent button, fill out the title, write the post, add tags, and click save. Of course, it varies a little bit across platforms because some offer niceties that others don’t, but otherwise blogging is pretty much the same everywhere. The workflow for a static site is a departure from that: open a new file in a text editor, write a few pieces of metadata at the top, write your post, save it, and use your tool to regenerate your site. The simplicity is great because it means you can adjust the process and the tools to make it your own.</p> <p>My website is hosted on Github Pages, which means it’s version controlled using Git. Git, and version control in general, keeps track of every change you record, effectively allowing you to time travel and “undo” back to the very beginning of your project if need be. In addition, Git has what are referred to as “branches”, which allow you to go on a tangent to try something new and either keep it or not depending on how it works out. 
Github provides some tools for making this prettier to use, but I usually work out of a console and do things manually.</p> <p>So, let’s tie this back into writing a blog.</p> <p>It’s worth noting that Jekyll itself supports drafts of blog posts. Drafting a post in Jekyll involves creating the draft in a drafts folder and later moving it to the folder where your published blog posts live. This works perfectly well, but when Git is involved it feels messy, since Git branches provide a different concept of drafting that doesn’t require moving files to their final location.</p> <p>When writing a new blog post, I draft it in place, in the folder where all of my published posts live. Before I commit (officially log changes) in Git, I create a new branch for it. I can work on multiple drafts on different branches, which helps remove the temptation to multitask because, when on one branch, drafts on other branches are invisible. I can work on and commit to my draft as much as I want, and even push it to Github if I want to share it without publishing it. 
Once I’m done with a draft, I merge it into the master branch of my website where it goes live when I push to Github.</p> <p>Here’s how this post came into existence:</p> <ol> <li>Create a new branch<br /> <code class="language-plaintext highlighter-rouge">git checkout -b blogging-with-git</code></li> <li>Write the post<br /> <code class="language-plaintext highlighter-rouge">vim blog/_posts/</code></li> <li>Commit the post to Git<br /> <code class="language-plaintext highlighter-rouge">git commit -m "Create blogging with git post" blog</code></li> </ol> <p>And how it got published:</p> <ol> <li>Switch to the master branch of the website<br /> <code class="language-plaintext highlighter-rouge">git checkout master</code></li> <li>Merge the draft in<br /> <code class="language-plaintext highlighter-rouge">git merge blogging-with-git</code></li> <li>Push the changes to Github<br /> <code class="language-plaintext highlighter-rouge">git push</code></li> <li>Github Pages then runs jekyll build and the post goes live!</li> </ol> <p>For me, it’s a great experience. It’s super lightweight so I can do it on any system that has Git installed, it’s very similar to my process of writing software, and it’s really easy to avoid distractions while I write. It’s also flexible which makes adapting it to different needs or habits really easy.</p> Tue, 09 Feb 2016 00:00:00 -0500