{"id":"user/00000000-000-NOT-VALID-a29b6679bb3c/category/global.uncategorized","updated":0,"items":[{"id":"55jQyVFBayOwBJQ5qCX8DsgTPumTnzjw6LozTAKPiWA=_13fe4b6e72f:1243ffd:e95a5f49","originId":"74988 at https://www.eff.org","fingerprint":"c04da56","title":"The EFF Guide to San Diego Comic-Con","published":1373927909000,"crawled":1373931759407,"alternate":[{"href":"https://www.eff.org/deeplinks/2013/07/eff-guide-san-diego-comic-con","type":"text/html"}],"author":"Dave Maass","origin":{"htmlUrl":"https://www.eff.org/rss/updates.xml","streamId":"feed/https://www.eff.org/rss/updates.xml","title":"Deeplinks"},"summary":{"content":"
With the arrival of summer at EFF, you can hear the excitement in the stuffing of luggage and locking of office doors as our team prepares for some of the most important conventions in the world. Black Hat starts on July 27, with DEF CON immediately after. But before those two kick off, there's San Diego Comic-Con, the largest celebration of the popular arts. For the first time, an EFF staffer will be pounding the plush conference carpet (and maneuvering around cosplayers) to take the pulse of the entertainment industry and catch up with some of our friends and fans.\n
After all, the geeks and nerds and fan-kids at Comic-Con are our peeps. In preparation, we threw together this quick guide to the panels that are bound to engage anyone following our issues, whether that's surveillance, free speech, or intellectual property.\n
If you're a creator with a booth at Comic-Con, drop me an email at dm@eff.org, and I'll swing by with some swag (as long as it lasts). If you're doing something EFF-friendly, maybe we'll even feature your work on our Deeplinks blog.\n
A quick note in advance on information safety: It's easy to lapse into the comfort of trust that comes with mingling with 100,000 like-minded folks. Still, you should take some basic precautions. If you're logging onto public Wi-Fi, make sure you use HTTPS Everywhere. Remember also that paying for goods in cash is always safer than swiping your credit card through a mobile device. Finally, when you’re signing up on mailing lists, or trading personal information for goodies, make sure to read the fine print about how that information will be shared.\n
Now for the fun stuff.\n
As the U.S. chases Edward Snowden and his trail of NSA documents around the world, it’s worth contemplating whether truth has finally proved itself stranger than fiction. How can sci-fi even keep up with the real world? Two TV shows and a video game have panels that, at least in some way, will touch on the issue.
Intelligence: Let's imagine all the spying power of the NSA and more: direct access to the global information grid, WiFi, cell phones, satellites, all channeled through the brain of Sawyer from Lost, or at least Josh Holloway, the actor who played the sly con-man. That’s the premise of Intelligence, a show set to start airing in February 2014 on CBS. Holloway and other members of the cast, and its executive producers, will sit on a panel moderated by the president and editor-in-chief of TV Guide. The booth on the convention floor will also feature a special “experience” with Google Glass.\n
Thursday, July 18, 10am to 11am - Ballroom 20\n
Person of Interest: With the third season starting this fall, Person of Interest is about what happens when you combine predictive technology with big data collected through video surveillance—and the secret operatives who act on the information. There will be a Q&A with the cast on Saturday, but supervising producer Amy Berg will also talk about the realities of the show as part of a panel on “science” in “science fiction.” The CBS show has also sponsored hotel key cards this year that capture the sentiment at EFF these days in the wake of the NSA leaks.
The Science of Science Fiction - Thursday July 18, 6:30pm - 7:30pm - Room 24ABC\n
Person of Interest Special Video Presentation and Q&A - Saturday July 20, 4:00pm - 4:45pm - Room 6BCF\n
Watch_Dogs: Does Privacy Exist? In this new, cross-platform video game, the user plays a hacker in Chicago with the power to manipulate the city’s digital infrastructure, including traffic signals and surveillance cameras. So-far unnamed security consultants are joining the lead story designer in a discussion about the technological truths (and stretches of truth) in the video game.\n
Saturday July 20, 6:00pm - 7:00pm - Room 9 \n

At EFF, we love the law like Browncoats love Firefly. So, of course, we’re keeping an eye on the robust legal programming at Comic-Con.
Comic Book Law School: Michael Lovitz, author of The Trademark and Copyright Book comic book, is leading a three-part series of panels on the intersection of intellectual property and pop culture. It’s designed for both creators and lawyers, with progressing levels of advancement—101, 202 and 303. (Bonus for attorneys: You’ll earn California MCLE credits.) \n
Thursday, Friday, Saturday, 10:30am - 12:00pm - Room 30CDE\n
Comic Book Legal Defense Fund: CBLDF is a legal nonprofit that joins us on the front lines of free speech, standing up for comic-book artists wherever in the world they face oppression. This year, the organization is featuring a panel on manga in Japan and an in-depth, three-part examination of the most important courtroom battles over comics. In another panel, they’ll go over the list of comics banned from public libraries and gear up support for Banned Books Week (Sept. 22-28).
Banned Comics - Thursday, July 18, 12:00pm - 1:00pm - Room 30CDE\n
Comics on Trial, Part 1, 2, 3 - Thursday, Friday, Saturday 1:00pm - 2:00pm - Room 30CDE\n
Defending Manga - Saturday, July 20, 2013 12:00pm - 1:00pm - Room 30CDE\n
Banned Comics Jam - Sunday July 21, 12:15pm - 1:45pm - Room 5AB\n
Comics Arts Conference Session #12: Superman on Trial: The Secret History of the Siegel and Shuster Lawsuits: The copyright dispute over Superman dates back more than 60 years, and it was only this January that the 9th U.S. Circuit Court of Appeals handed down a major decision. A biographer and an intellectual-property professor will bring attendees up to speed on the case.
Sunday July 21, 10:30am - 11:30am - Room 26AB\n

If you were to take a tour of EFF’s office, you’d immediately notice our love of activist art in the posters lining our halls and office walls. While the following panels don’t directly address digital issues, they do reflect how art can be a powerful tool in effecting change.
Comic-Con How-To: Social Practice: Guerilla Art Tactics for Sharing Your Vision with the World\n
Thursday July 18, 3:00pm - 5:00pm - Room 2 \n
Black Mask: Bringing a Punk Rock Sensibility, Activism, and Wu-Tang to Comics\n
Thursday July 18, 8:30pm - 9:30pm - Room 8 \n
Comics Arts Conference Session #7: Heroes/Creators: The Comic Art Creations of Civil Rights Legends\n
Friday July 19, 2013 1:00pm - 2:00pm - Room 26AB\n
Ode to Nerds: EFF fellow and Pioneer Award winner Cory Doctorow (whose young-adult novel Little Brother is San Francisco's One City One Book selection) joins io9.com writer and EFF buddy Charlie Jane Anders in a panel about nerd culture, featuring none other than Fight Club’s Chuck Palahniuk. They'll also be signing autographs at 3:15 p.m. in the Comic-Con Sails Pavilion.
Thursday, July 18, 1:45pm - 2:45pm - Room 6A\n
Publishing SF/F in the Digital Age: Doctorow, also a champion of DRM-free e-books, will join a panel of authors and booksellers discussing the evolution of fiction in an increasingly digital world.\n
Friday July 19, 7:00pm - 8:00pm - Room 25ABC\n
Science Fiction That Will Change Your Life: Anders and io9 editor and former EFF staffer Annalee Newitz are leading a panel on the most inspiring science fiction of the past year.\n
Friday July 19, 6:45pm - 7:45pm - Room 5AB\n
Adam and Jamie Look Toward the Future: The Mythbusters team has long supported EFF’s work, and now it’s time to return the favor. We’ll be in line early to check out the show’s 10-year retrospective discussion and the duo’s plans moving forward.
Hundreds of protesters gathered in San Francisco and thousands more in cities around the United States earlier this month in support of \"Restore the Fourth,\" a grassroots and non-partisan campaign dedicated to defending the Fourth Amendment. The protests took aim in particular at the National Security Agency's unconstitutional dragnet surveillance programs, details of which have emerged in leaked documents over the past month.\n
\"Restore the Fourth\" isn't officially affiliated with any formal organizations, but given our shared goal of ending illegal spying on Americans, EFF had the opportunity to speak to the crowd. Below, you'll find a short video of some highlights from that speech, and the full text as prepared.\n
\nHello everybody, and thank you for coming out here today to stand up for all of our Fourth Amendment rights. At EFF, we've been engaged in lawsuits about these secret and unconstitutional NSA programs for the better part of a decade, and we need the government to see that the American people are outraged.\n
Because 237 years ago today, our founding fathers refused to live under tyranny and declared independence from their ruling government. The king, they wrote in the Declaration of Independence, had made it impossible to live under a rule of law.
A few years later they wrote the document that established the United States of America, our Constitution, and with it they published our Bill of Rights to protect the basic rights that every person in this country is entitled to.\n
The National Security Agency's spying on Americans, revealed by a series of whistleblowers driven by conscience, represents a break from that tradition. And it's our duty not to allow that break. It's our duty as human beings, entitled to dignity and privacy, and it's our duty as Americans, protecting those rights not just for ourselves but for everybody who follows in our steps.
The Founders wrote the Fourth Amendment deliberately, with a specific purpose: to ensure that so-called \"general warrants\" were illegal. These general warrants were broad and unreasonable dragnets, requiring anyone targeted to forfeit their information to the government.\n
No, under the Fourth Amendment, a warrant needs to be specific. You need a particular target and probable cause.\n
Compare that with what we know the NSA is doing, and has been doing for years. Even now we can't know the full scope of what the government is doing when it claims to act in our names, but we know about these four programs:\n
- The NSA obtains the telephone records of every single customer of phone companies like Verizon. They try to brush that under the rug by saying it's \"just metadata,\" but the invasiveness of that metadata can be truly astonishing: every single call, who is on both ends, how long they spoke, and more.\n
- The NSA in some cases obtains the actual content of phone calls, effectively listening in on private conversations.\n
- The NSA taps the very basic infrastructure of our net, sucking up the raw data and storing it for who knows how long, doing who knows what kind of analysis to it.\n
- The NSA obtains content from major tech companies, many of whom are based right here in this city, including videos, email messages and more, based on a 51% chance—a guess, basically—that the \"target\" of the investigation is a foreigner talking to somebody in the US.\n
Make no mistake: these programs are illegal. These are illegal under the Fourth Amendment, which we celebrate here today, and they are propped up only by outrageous and dishonest readings of laws that violate Congress' intentions.\n
And when the Director of National Intelligence was asked about these illegal programs on the floor of Congress, he flat-out lied about them, to Congress and to the American people. Once again, this government is acting in our name, but it refuses even basic accountability for its actions.\n
You don't lie to Congress to hide programs if you believe that they're legal. That's why we're demanding a few steps to once and for all shine some light on the activities of the NSA.\n
We want a full, independent, public Congressional investigation into what the NSA is doing, and what it claims it's allowed to do.\n
We want the public to see the secret legal decisions, made in a secret court, about what kind of surveillance the government's doing.\n
We want the public to see the Inspector General Reports about these programs.\n
We want to see how the government justifies these programs—that means any other reports about how necessary and effective they are.\n
And most importantly, we want public courts to determine the legality of these programs.\n
We think public courts will see through the NSA's torturing of the English language, and see that it's searching all of us, and seizing our data, in ways that are absolutely unreasonable.\n
Once the courts have reviewed these programs, we want the dragnet surveillance to stop. No more bulk collection of Americans' communication records, and no more open access to the backbone of the Internet.\n
In 1975, after widespread illegal activity by the NSA, the FBI, and the CIA, Senator Frank Church chaired a committee to examine that bad behavior and rein it in with new laws. It wasn't an easy road, but the Church Committee was able to establish some of the first real safeguards against these sorts of illegal activities. Frank Church was clear about why these were necessary:
If these agencies were to turn on the American people, Church said, \"no American would have any privacy left, such is the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide.\"\n
Those are powerful words. But it's up to us to ensure that those words are a battle cry in the fight for our privacy, and not the epitaph on its grave.\n
The Fourth Amendment is there to protect us, but there comes a time when we have to step in and protect it. The NSA has treated the Fourth Amendment and the rest of the US Constitution like a suggestion they're free to ignore. Today, we stand up, and we let them know: that is unacceptable. We, the American people, will not let ourselves fall under tyranny, and we will not let government agencies establish the infrastructure for turnkey totalitarianism.\n
We will push back, we will fight, and we will do whatever it takes to restore the Fourth Amendment and the rest of the U.S. Constitution. Thank you all for coming out today to send that message.
Each year, EFF’s Who Has Your Back campaign assesses the policies and practices of major Internet companies as a way to encourage and incentivize those companies to take a stand for their users in the face of government demands for data. Normally, when a company demonstrates it has a policy or practice that advances user privacy, like fighting for its users in courts, we award the company a gold star. Sometimes, even when companies stand up for their users, they're forbidden from telling us about it because of unduly restrictive secrecy laws or court orders prohibiting them from doing so.
Which, for the past six years, is exactly what happened to Yahoo. In honor and appreciation of the company’s silent and thankless battle for user privacy in the Foreign Intelligence Surveillance Court (FISC), EFF is proud to award Yahoo with a star of special distinction in our Who Has Your Back survey for fighting for its users in (secret) courts.
In 2007, Yahoo received an order to produce user data under the Protect America Act (the predecessor statute to the FISA Amendments Act, the law on which the NSA’s recently disclosed Prism program relies). Instead of blindly accepting the government’s constitutionally questionable order, Yahoo fought back. The company challenged the legality of the order in the FISC, the secret surveillance court that grants government applications for surveillance. And when the order was upheld by the FISC, Yahoo didn’t stop fighting: it appealed the decision to the Foreign Intelligence Surveillance Court of Review, a three-judge appellate court established to review decisions of the FISC.
Ultimately, the Court of Review ruled against Yahoo, upholding the constitutionality of the Protect America Act and ordering Yahoo to turn over the user data the government requested. The details of the data turned over, and even the full opinion of the Court of Review, remain secret (a redacted version of the court’s opinion was released in 2008). Indeed, the fact that Yahoo was involved in the case was a secret until the New York Times revealed it earlier this month. Following the Times article and a new motion for disclosure by Yahoo, the government acknowledged that more information could be made available about the case, including the fact that Yahoo was involved.
After six years of silence, Yahoo is finally able to speak publicly about its fight.
Yahoo went to bat for its users – not because it had to, and not because of a possible PR benefit – but because it was the right move for its users and the company. It’s precisely this type of fight – a secret fight for user privacy – that should serve as the gold standard for companies, and such a fight must be commended. While Yahoo still has a way to go in the other Who Has Your Back categories (and they remain the last major email carrier not using HTTPS encryption by default), Yahoo leads the pack in fighting for its users under seal and in secret.
Of course, it's possible more companies have challenged this secret surveillance, but we just don't know about it yet. We encourage every company that has opposed a FISA order or directive to move to unseal its opposition so the public will have a better understanding of how these companies have fought for their users.
Until then, we hope Yahoo's star will serve as a beacon for all companies: fighting for your users' privacy is the right thing to do, even if you can't let them know.
\n
In the past two weeks Congress has introduced a slew of bills responding to the Guardian's publication of a top secret court order using Section 215 of the PATRIOT Act to demand that Verizon Business Network Services give the National Security Agency (NSA) a record of every customer's call history for three months. The order was confirmed by officials like President Obama and Senator Feinstein, who said it was a \"routine\" 90 day reauthorization of a program started in 2007.\n
Currently, four bills have been introduced to fix the problem: one each from Sen. Leahy, Sen. Sanders, Sens. Udall and Wyden, and Rep. Conyers. The well-intentioned bills try to address the Justice Department's (DOJ) abusive interpretations of Section 215 (more formally, 50 USC § 1861), apparently approved by the secretive Foreign Intelligence Surveillance Court (FISA Court) in secret legal opinions.
Sadly, all of them fail to fix the problem of unconstitutional domestic spying—not only because they ignore the PRISM program, which uses Section 702 of the Foreign Intelligence Surveillance Act (FISA) and collects Americans' emails and phone calls—but because the legislators simply don't have key information about how the government interprets and uses the statute. Congress must find out more about the programs before it can propose fixes. That's why a coalition of over 100 civil liberties groups and over half a million people are pushing for a special congressional investigatory committee, more transparency, and more accountability.\n
The American public has not seen the secret law and legal opinions supposedly justifying the unconstitutional NSA spying. Just this week the New York Times and Wall Street Journal (paywall) reported that the secret law includes dozens of opinions—some of which are hundreds of pages long—gutting the Fourth Amendment. The special investigative committee must find out necessary information about the programs and about the opinions. Or, at the very least, extant committees like the Judiciary or Oversight Committees must conduct more open hearings and release more information to the public. Either way, the process must start with the publication of the secret legal opinions of the FISA Court, and the opinions drafted by the Department of Justice's Office of Legal Counsel (OLC).\n
Some of the bills try to narrow Section 215 by heightening the legal standard for the government to access information. Currently, the FBI can obtain \"any tangible thing\"—including, surprisingly, intangible business records about Americans—that is \"relevant\"\n
\nto an authorized investigation to obtain foreign intelligence information not concerning a US person or to protect against international terrorism or clandestine intelligence activities
with a statement of facts showing that there are \"reasonable grounds to believe\" that the tangible things are \"relevant\" to such an investigation. Bills by Rep. Conyers and Sen. Sanders attempt to heighten the standard by using pre-9/11 language mandating \"specific and articulable facts\" about why the FBI needs the records. Rep. Conyers goes one step further than Sen. Sanders by forcing the FBI to include why the records are \"material,\" or significantly relevant, to an investigation.\n
By heightening the legal standard, the legislators intend for the FBI to show exactly why a mass database of calling records is relevant to an investigation. But it's impossible to know if these fixes will stop the unconstitutional spying without knowing how the government defines key terms in the bills. The bills by Sen. Leahy and Sens. Udall and Wyden do not touch this part of the law.\n
Sens. Udall, Wyden, and Leahy use a different approach; their bills mandate that every order explain why the records "pertain to" an individual or are "relevant to" an investigation. Collectively this aims—but most likely fails—to stop the government from issuing "bulk records orders" like the Verizon order. Sen. Sanders travels a different path by requiring the government to specify why "each of" the business records is related to an investigation; however, it's also unclear whether this stops the spying. Yet again, Rep. Conyers' bill provides the strongest language, as it deletes ambiguous clauses and forces all requests to "pertain only to" an individual; however, even the strongest language found in these bills will probably not stop the unconstitutional spying.
Unfortunately, legislators are trying to edit the statutory text before thoroughly understanding how the government is using key definitions or how the FISA Court is interpreting the statute. For instance, take the word "relevant." The "tangible thing" produced under a Section 215 order must be "relevant" to the specific type of investigation mentioned above. But the Verizon order requires every Verizon customer's call history.
The New York Times confirmed the secret FISA court was persuaded by the government that this information is somehow relevant to such an investigation. The Wall Street Journal (paywall), quoting \"people familiar with the [FISA Court] rulings\" wrote: \"According to the [FISA Court], the special nature of national-security and terrorism-prevention cases means 'relevant' can have a broader meaning for those investigations.\" Obviously, only severely strained legalese—similar to the Department of Justice's re-definition of \"imminent\"—could justify such an argument. And the Fourth Amendment was created to protect against this exact thing—vague, overbroad \"general warrants\" (.pdf).\n
If \"relevant\" has been defined to permit bulk data collection, requiring more or better facts about why is unlikely to matter. Even Sen. Sanders's approach—which would require \"each\" record be related to an investigation—could fall short if \"relevance\" is evaluated in terms of the database as a whole, rather than its individual records. This is just one example of why the secret FISA Court decisions and OLC opinions must be released. Without them, legislators cannot perform one of their jobs: writing legislation.\n
The actions revealed by the government strike at the very core of our Constitution. Further, the majority of Congress is unaware of the specific language and legal interpretations used to justify the spying. Without this information, Congress can only legislate in the dark. It's time for Congress to investigate these matters to the fullest extent possible. American privacy should not be held hostage by secrecy. Tell Congress now to push for a special investigative committee, more transparency, and more accountability.
A growing number of independent game developers have received demand letters from Treehouse Avatar Technologies for allegedly violating patent 8,180,858, a \"Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices.\" Essentially, this patent covers creating a character online, and having the game log how many times a particular character trait was chosen.\n
In other words, an unbelievably basic data analytics method was somehow approved to become a patent.\n
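To see just how basic the claimed method is, here is a minimal sketch of the general idea: present character-trait choices to a user, record each selection, and tally how often each trait is chosen. Everything here is hypothetical (the trait names, structure, and function are ours, not drawn from the patent claims or from any accused game):

```python
from collections import Counter

# Hypothetical trait categories a player can choose from.
TRAITS = {
    "hair": ["red", "black", "blonde"],
    "class": ["warrior", "mage", "rogue"],
}

# Running tally of how many times each (category, value) was chosen.
trait_counts = Counter()

def create_character(choices):
    """Record a player's character choices and update the tallies."""
    for category, value in choices.items():
        if value not in TRAITS.get(category, []):
            raise ValueError(f"unknown {category}: {value}")
        trait_counts[(category, value)] += 1
    return dict(choices)

create_character({"hair": "red", "class": "mage"})
create_character({"hair": "red", "class": "rogue"})
print(trait_counts[("hair", "red")])  # → 2
```

A counter keyed on trait choices is the kind of analytics a first-year programming student could write, which is exactly why demand letters over it alarm independent developers.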
The patent troll, Treehouse, has surfaced before. Back in October 2012, the company sued Turbine, developer of Dungeons and Dragons Online and Lord of the Rings Online.\n
This is a textbook patent troll case. Treehouse owns a very broad software patent but doesn't, it seems, make or manufacture anything itself. The company simply sends demand letters around or, in some cases, sues alleged infringers. And developers—most recently, independent game developers—lose out through lawyer fees, licensing fees, litigation costs, or the fear of implementing what seems to be a very basic, obvious feature in their products.
When trolls attack, innovation is stifled. For more on everything EFF is doing to change this reality, visit our patent issue page.\n
Over July 4th, thousands of people in cities across the United States rallied in defense of the Fourth Amendment.\n
Tomorrow, Restore the Fourth – the grassroots, nonpartisan movement supporting the Fourth Amendment and opposing NSA spying – is taking the battle to the phones. A number of Restore the Fourth chapters will be hosting a “Restore the Phone” event. They will be encouraging concerned citizens to call their members of Congress and demand transparency and reform of America’s domestic spying practices.\n
According to their blog post, Restore the Fourth intends to use Friday to draw the attention of Congress. They provide a suggested script (Google doc) for callers which includes strong language against the NSA spying program:\n
\nThis type of blanket data collection by the government erodes essential and constitutionally protected American values. Furthermore, the body of secret surveillance law that has developed in an attempt to justify this type of domestic surveillance is antithetical to democracy. The NSA’s domestic spying program is not the American way.
We think phone calls are among the most effective ways to make Washington hear the concerns of constituents. We’re proud to support this initiative, and we urge our friends and members to join the call-in day.
Here are two ways you can speak out (note, if you are outside of the United States you should go here to take our international alert).\n
And when you’ve finished calling Congress, remember to spread the word on social media and add your name to the Stop Watching Us campaign.\n
Important notes about your privacy: the automated calling tools above have promised to protect your privacy; your phone number will be used for this campaign and nothing else unless you request additional contact. If you don’t want your information processed by the automated calling tools, use the EFF page to get a phone number and call directly. Learn more by visiting the privacy policies of Fight for the Future and Twilio.
\n
Since the Guardian and Washington Post started publishing secret NSA documents a month ago, the press has finally begun digging into the operations of the ultra-secretive Foreign Intelligence Surveillance Act (FISA) court, which is partly responsible for the veneer of legality painted onto the NSA’s domestic surveillance programs. The new reports are quite disturbing to anyone who cares about the Fourth Amendment, and they only underscore the need for major reform.
As the New York Times reported on its front page on Sunday, “In more than a dozen classified rulings, the nation’s surveillance court has created a secret body of law giving the National Security Agency the power to amass vast collections of data on Americans.” The court, which was originally set up simply to approve or deny wiretap requests, now “has taken on a much more expansive role by regularly assessing broad constitutional questions and establishing important judicial precedents,” with no opposing counsel to offer counterarguments to the government and rulings that cannot be appealed outside its secret structure. “It has quietly become almost a parallel Supreme Court,” reported the Times.
The Wall Street Journal reported on one of the court’s most controversial decisions (or at least one of the controversial decisions we know of), in which it radically re-interpreted the word “relevant” in Section 215 of the Patriot Act to allow for the dragnet collection of every phone call record in the United States.\n
The Journal explained:\n
\nThe history of the word \"relevant\" is key to understanding that passage. The Supreme Court in 1991 said things are \"relevant\" if there is a \"reasonable possibility\" that they will produce information related to the subject of the investigation. In criminal cases, courts previously have found that very large sets of information didn't meet the relevance standard because significant portions—innocent people's information—wouldn't be pertinent.\n
But the Foreign Intelligence Surveillance Court, FISC, has developed separate precedents, centered on the idea that investigations to prevent national-security threats are different from ordinary criminal cases. The court's rulings on such matters are classified and almost impossible to challenge because of the secret nature of the proceedings.
Essentially, the court re-defined the word “relevant” to mean “anything and everything.” Sens. Ron Wyden and Mark Udall explained two years ago on the Senate floor that Americans would be shocked if they knew how the government was interpreting the Patriot Act. This is exactly what they were talking about.\n
It’s likely the precedent laid down in the last few years will stay law for years to come if the courts are not reformed. FISA judges are appointed by one unelected official who holds lifetime office: the Chief Justice of the Supreme Court. Under current law, for the coming decades, Chief Justice John Roberts will solely decide who will write the sweeping surveillance opinions few will be allowed to read, but which everyone will be subject to.\n
Judge James Robertson was once one of those judges. He served on the court until 2005, and yesterday he confirmed for the first time that he resigned that year in protest of the Bush administration illegally bypassing the court altogether. Since Robertson left, however, the court has transitioned from being ignored to wielding enormous, undemocratic power.
“What FISA does is not adjudication, but approval,” Judge Robertson said. “This works just fine when it deals with individual applications for warrants, but the [FISA Amendments Act of 2008] has turned the FISA court into [an] administrative agency making rules for others to follow.”
Under the FISA Amendments Act, \"the court is now approving programmatic surveillance. I don't think that is a judicial function.” He continued, \"Anyone who has been a judge will tell you a judge needs to hear both sides of a case…This process needs an adversary.\"\n
No opposing counsel, rulings handed down in complete secrecy by judges appointed by an unelected official, and no way for those affected to appeal. As The Economist stated, “Sounds a lot like the sort of thing authoritarian governments set up when they make a half-hearted attempt to create the appearance of the rule of law.”\n
This scandal should precipitate many reforms, but one thing is certain: FISA rulings need to be made public so the American people understand how courts are interpreting their constitutional rights. The very idea of democratic law depends on it.\n
By Greg Epstein, EFF and Global Voices Advocacy Intern\n
Demonstrators in Turkey have occupied Istanbul’s Taksim Square since last May, in a movement that began as an effort to protect a city park, but has evolved into a larger mobilization against the ruling party’s increasingly autocratic stance.\n
Prime Minister Erdogan and the ruling AKP party have used many tools to silence voices of the opposition. On June 15, police began using tear gas and water cannons to clear out the large encampment in the park. But this effort also has stretched beyond episodes of physical violence and police brutality into the digital world, where information control and media intimidation are on the rise.\n
Since the protests began, dozens of Turkish social media users have been detained on charges ranging from inciting demonstrations, to spreading propaganda and false information, to insulting government officials. Dozens more Twitter users were reportedly arrested for posting images of police brutality, though the legal pretext for these arrests is unclear. A recent ruling by an Ankara court ordered 22 demonstrators detained on terrorism-related charges.
Prime Minister Erdogan made his view of social media known when he described it as “the worst menace to society” at a June press conference. It is worth noting that Erdogan himself is said to maintain a Twitter account with over 3 million followers and 2,000 tweets (some Turks question whether the unverified account is really him or an unofficial supporter). While the Turkish government has had limited, if any, involvement in tampering with social media access thus far, government officials appear eager to take further action.
Roots in traditional media\n
Although current circumstances appear to be testing the limits of Turkey’s information policy framework, the country has a long history of restrictive media policy and practice. In 2013, Turkey ranked 154 out of 166 on the Reporters Without Borders’ Annual Worldwide Press Freedom Index, due in part to the fact that since 1992, 18 journalists have been murdered there, 14 with impunity. Authorities have fined, detained, and even beaten members of the press in response to their protest coverage. Institutional censorship has also been prevalent: When clashes between protesters and police escalated, activists noted that CNN Turk aired a documentary on penguins while CNN International ran live coverage of the events in Taksim Square.
Dubbed the “the world’s biggest prison for journalists” by Reporters Without Borders, Turkey has been particularly aggressive in arresting Kurdish journalists under Turkey’s anti-terrorism law known as Terörle Mücadele Yasası.\n
Controlling digital expression\n
As of 2012, 45% of Turkey’s population had regular access to the Internet. The country’s leading ISP, Türk Telekom (TT), formerly a government-controlled monopoly, was privatized in 2005 but retained a 95 percent market share in 2007. Türk Telekom also controls the country’s only commercial backbone.
Internet Law No. 5651, passed in 2007, prohibits online content in eight categories including prostitution, sexual abuse of children, facilitation of the abuse of drugs, and crimes against (or insults to) Atatürk. The law authorizes the Turkish Supreme Council for Telecommunications and IT (TIB) to block a website when it has “adequate suspicion” that the site hosts illegal content. In 2011, the Council of Europe’s Commissioner for Human Rights reported that 80% of online content blocked in Turkey was due to decisions made by the TIB, with the remaining 20% being blocked as the result of orders by Turkey’s traditional court system. In 2009 alone, nearly 200 court decisions found TIB decisions to block websites unjustifiable because they fell outside the scope of Law 5651. The law also has been criticized for authorizing takedowns of entire sites when only a small portion of their content stands in violation of the law.
Between 2008 and 2010, YouTube was blocked in its entirety under Law 5651 because of specific videos that fell into the category of “crimes against Atatürk”. During this period, YouTube continued to be the 10th most visited site in Turkey, with users accessing the site through proxies. The ban was eventually lifted when YouTube removed the videos in question and came into compliance with Turkish law. Sites like Blogspot, Metacafe, Wix and others have gone through similar ordeals in Turkey in recent years. An estimated 31,000 websites are blocked in the country.
In December 2012, the European Court of Human Rights (ECHR) found that Turkey had violated its citizens’ right to free expression by blocking Google Sites. While Turkey justified the ban based on Sites’ hosting of websites that violated Law 5651, the ECHR found that Turkish law did not allow for “wholesale blocking of access” to a hosting provider like Google Sites. Furthermore, Google Sites had not been informed that it was hosting “illegal” content.
In 2011, Turkey proposed a mandatory online filtering system described as an effort to protect minors and families. This new system, dubbed Güvenli İnternet, or Secure Internet, would block any website that contained keywords from a list of 138 terms deemed inappropriate by telecom authority BTK. The plan was met with public backlash and protests, causing the government to re-evaluate the system and eventually offer it as an opt-in service. While only 22,000 of Turkey’s 11 million Internet users have so far opted for the system, opponents of Güvenli İnternet decry it as a form of censorship, disguised as an effort to protect children and families from “objectionable content”.
New policies could further restrict social networks\n
As the protests continue, the Turkish government is working to use legal tools already at its disposal to increase control over social network activity. Transportation and Communications Minister Binali Yildirim has called on Twitter to establish a representative office within the country. Legally, this could give the Turkish government greater ability to obtain user data from the company. But these requests have not received a warm response from Twitter, which has developed a reputation for protecting user data in the face of government requests. While Twitter has “turned down” requests from the Turkish government for user data and general cooperation, Minister Yildirim stated that Facebook had responded “positively”. Shortly thereafter, Facebook published a “Fact Check” post that denied cooperation with Turkish officials.\n
Turkey’s Interior Minister Muammer Güler told journalists that “the issue [of social media] needs a separate regulation” and Deputy Prime Minister Bozdag stated that the government had no intention of placing an outright ban on social media, but indicated a desire to outlaw “fake” social media accounts. Sources have confirmed that the Justice Ministry is conducting research and drafting legislation on the issue.\n
New media expert Ozgur Uckan of Istanbul’s Bilgi University noted that “censoring social media sites presents a technical challenge, and that may be why officials are talking about criminalizing certain content, in an effort to intimidate users and encourage self-censorship.”\n
While the details of these new laws remain to be seen, it is likely that they will have some impact on journalistic and activist activities in the country, especially in times of rising public protest and dissent.\n
This is the 8th article in our Spies Without Borders series, which looks at how the information disclosed in the NSA leaks affects internet users around the world.
As we have discussed throughout our Spies Without Borders series, the backlash against the NSA’s global surveillance programs has been strong. From Germany, where activists demonstrated against the mass spying, to Egypt—allegedly one of the NSA’s top targets—where the reaction is largely the same: “I’m not American, but I have rights too.”\n
Indian commentators are no exception. A piece in the Financial Times stated that the revelations highlighted the “moral decline of America,” while another in the Hindu berated India for its “servility” toward the U.S.\n
But the revelations about the NSA’s spying activities have also created an opportunity for Indian activists to speak out about their own country’s practices. As Pranesh Prakash, Policy Director for the Centre for Internet & Society argues in a piece for the Economic Times, Indian surveillance laws and practices have been “far worse” than those in the U.S. Writing for Quartz, Nandagopal J. Nair agrees, saying that “India’s new surveillance network will make the NSA green with envy.”\n
The U.S. has in fact refused Indian requests for real-time access to internet activity routed through U.S.-based Internet sites, and U.S. companies have also stood up to privacy-violating demands. Other companies, such as RIM—the company that owns BlackBerry—have cooperated with the Indian government.\n
Regulatory privacy protections in India are weak: Telecom companies are required by license to provide data to the government, and the use of encryption is extremely limited. As we have previously explained, Indian service providers are required to ensure that bulk encryption is not deployed. Additionally, no individual or entity can employ encryption with a key longer than 40 bits; to exceed this limit, the individual or entity needs prior written permission from the Department of Telecommunications and must deposit the decryption keys with the Department. In practice, this limitation means that any encrypted material in India would be accessible by the State. Ironically, the Reserve Bank of India recommends that banks use strong encryption, as high as 128-bit, to secure browser sessions. In addition to such limitations on the use of encryption, commentators have also raised concerns about the process for lawful intercept.
The latest attempt at surveillance by the Indian government has been roundly criticized as “more intrusive” than the NSA’s programs. In the New York Times, Prakash explained the new program, the Centralized Monitoring System or C.M.S.:\n
With the C.M.S., the government will get centralized access to all communications metadata and content traversing through all telecom networks in India. This means that the government can listen to all your calls, track a mobile phone and its user’s location, read all your text messages, personal e-mails and chat conversations. It can also see all your Google searches, Web site visits, usernames and passwords if your communications aren’t encrypted.
Notably, India does not have laws allowing for mass surveillance; rather, lawful intercept is covered under the archaic Indian Telegraph Act of 1885 [PDF] and the Information Technology Act of 2000 (IT Act). Under both laws, interception must be time-limited and targeted.\n
In the Times piece, Prakash also lambasts the IT Act, which he says “very substantially lowers the bar for wiretapping.” \n
All of this points to the fact that our fight for privacy is a shared global challenge; or as a columnist for India’s Sunday Guardian recently put it: “We're all now citizens of the surveillance state.”\n
This post is a complement to one I wrote in August of last year, updating it for Go 1.1. Since last year, tools such as goxc have appeared which go beyond a simple shell wrapper to provide a complete build and distribution solution.
Go provides excellent support for producing binaries for foreign platforms without having to install Go on the target. This is extremely handy for testing packages that use build tags or where the target platform is not suitable for development.\n
Support for building a version of Go suitable for cross compilation is built into the Go build scripts; just set GOOS, GOARCH, and possibly GOARM correctly and invoke ./make.bash in $GOROOT/src. Therefore, what follows is provided simply for convenience.
1. Install Go from source. The instructions are well documented on the Go website, golang.org/doc/install/source. A summary for those familiar with the process follows.\n
% hg clone https://code.google.com/p/go
% cd go/src
% ./all.bash
2. Checkout the support scripts from Github, github.com/davecheney/golang-crosscompile\n
% git clone git://github.com/davecheney/golang-crosscompile.git
% source golang-crosscompile/crosscompile.bash
3. Build Go for all supported platforms\n
% go-crosscompile-build-all
go-crosscompile-build darwin/386
go-crosscompile-build darwin/amd64
go-crosscompile-build freebsd/386
go-crosscompile-build freebsd/amd64
go-crosscompile-build linux/386
go-crosscompile-build linux/amd64
go-crosscompile-build linux/arm
go-crosscompile-build windows/386
go-crosscompile-build windows/amd64
This will compile the Go runtime and standard library for each platform. You can see these packages if you look in go/pkg.
% ls -1 go/pkg
darwin_386
darwin_amd64
freebsd_386
freebsd_amd64
linux_386
linux_amd64
linux_arm
obj
tool
windows_386
windows_amd64
Sourcing crosscompile.bash provides a go-$GOOS-$GOARCH function for each platform; you can use these as you would the standard go tool. For example, to compile a program to run on linux/arm:
% cd $GOPATH/github.com/davecheney/gmx/gmxc
% go-linux-arm build
% file ./gmxc
./gmxc: ELF 32-bit LSB executable, ARM, version 1 (SYSV),
statically linked, not stripped
This file is not executable on the host system (darwin/amd64), but will work on linux/arm.
This post describes how to produce an environment that will build Go programs for your target environment; it will not, however, build a Go environment for your target. For that, you must build Go directly on the target platform. For most platforms this means installing from source, or using a version of Go provided by your operating system's packaging system.
It is currently not possible to produce a cgo-enabled binary when cross compiling from one operating system to another. This is because packages that use cgo invoke the C compiler directly as part of the build process to compile their C code and produce the C-to-Go trampoline functions. At the moment the name of the C compiler is hard coded to gcc, which assumes the system default gcc compiler even if a cross compiler is installed.
In Go 1.1 this restriction was reinforced further by making CGO_ENABLED default to 0 (off) when any cross compilation was attempted.
Because some ARM platforms lack a hardware floating point unit, the GOARM value is used to tell the linker whether to use hardware or software floating point code. Depending on the specifics of the target machine you are building for, you may need to supply this environment value when building.
% GOARM=5 go-linux-arm build\n
As of e4b20018f797 you will at least get a nice error telling you which GOARM value to use.
$ ./gmxc
runtime: this CPU has no floating point hardware, so it cannot
run this GOARM=7 binary. Recompile using GOARM=5.
By default, Go assumes a hardware floating point unit if no GOARM value is supplied. You can read more about Go on linux/arm
on the Go Language Community Wiki.","direction":"ltr"},"title":"An introduction to cross compilation with Go 1.1","published":1373325259000,"alternate":[{"href":"http://dave.cheney.net/2013/07/09/an-introduction-to-cross-compilation-with-go-1-1","type":"text/html"}],"author":"Dave Cheney","crawled":1373326936982,"origin":{"streamId":"feed/http://dave.cheney.net/feed","htmlUrl":"http://dave.cheney.net","title":"Dave Cheney"},"summary":{"content":"This post is a compliment to one I wrote in August of last year, updating it for Go 1.1. Since last year tools such as goxc have appeared which go a beyond a simple shell wrapper to provide a complete build and distribution solution. Introduction Go provides excellent support for producing binaries for foreign platforms [...]","direction":"ltr"},"unread":true},{"id":"BNU7PBBhwHi0trgjATqXMRVLGGikmyZQMhiBOKF3WjM=_13fb7a49547:1a07f27:1fba5d06","originId":"http://dave.cheney.net/?p=655","fingerprint":"e20fa0ad","keywords":["Go","Programming","benchmarking","profiling"],"content":{"content":"
The Go runtime has built-in support for several types of profiling that can be used to inspect the performance of your programs. A common way to leverage this support is via the testing package, but if you want to profile a full application it is sometimes complicated to configure the various profiling mechanisms.
I wrote profile to scratch my own itch and create a simple way to profile an existing Go program without having to restructure it as a benchmark.\n
profile is go get-able, so installation is as simple as
go get github.com/davecheney/profile\n
Enabling profiling in your application is as simple as one line at the top of your main function:
import "github.com/davecheney/profile"

func main() {
	defer profile.Start(profile.CPUProfile).Stop()
	...
}
What to profile is controlled by the *profile.Config value passed to profile.Start. A nil *profile.Config is the same as choosing all the defaults. By default no profiles are enabled.
In this more complicated example a *profile.Config is constructed by hand which enables memory profiling, but disables the shutdown hook.
import "github.com/davecheney/profile"

func main() {
	cfg := profile.Config{
		MemProfile:     true,
		NoShutdownHook: true, // do not hook SIGINT
	}
	// p.Stop() must be called before the program exits to
	// ensure profiling information is written to disk.
	p := profile.Start(&cfg)
	...
}
Several convenience variables are provided for cpu, memory, and block (contention) profiling.\n
For more complex options, consult the documentation on the profile.Config type. Enabling more than one profile may cause your results to be less reliable, as profiling itself is not without overhead.
To show profile in action, I modified cmd/godoc following the instructions in the first example.
% godoc -http=:8080
2013/07/07 15:29:11 profile: cpu profiling enabled, /tmp/profile002803/cpu.pprof
In another window I visited http://localhost:8080 a few times to have some profiling data to record, then stopped godoc.
^C2013/07/07 15:29:33 profile: caught interrupt, stopping profiles
% go tool pprof $(which godoc) /tmp/profile002803/cpu.pprof
Welcome to pprof! For help, type 'help'.
(pprof) top10
Total: 15 samples
   2  13.3%  13.3%    2  13.3% go/scanner.(*Scanner).next
   2  13.3%  26.7%    2  13.3% path.Clean
   1   6.7%  33.3%    3  20.0% go/scanner.(*Scanner).Scan
   1   6.7%  40.0%    1   6.7% main.hasPathPrefix
   1   6.7%  46.7%    3  20.0% main.mountedFS.translate
   1   6.7%  53.3%    1   6.7% path.Dir
   1   6.7%  60.0%    1   6.7% path/filepath.(*lazybuf).append
   1   6.7%  66.7%    1   6.7% runtime.findfunc
   1   6.7%  73.3%    2  13.3% runtime.makeslice
   1   6.7%  80.0%    2  13.3% runtime.mallocgc
profile is available under a BSD licence.","direction":"ltr"},"title":"Introducing profile, super simple profiling for Go programs","published":1373175683000,"alternate":[{"href":"http://dave.cheney.net/2013/07/07/introducing-profile-super-simple-profiling-for-go-programs","type":"text/html"}],"author":"Dave Cheney","crawled":1373175584071,"origin":{"streamId":"feed/http://dave.cheney.net/feed","htmlUrl":"http://dave.cheney.net","title":"Dave Cheney"},"summary":{"content":"Introduction The Go runtime has built in support for several types of profiling that can be used to inspect the performance of your programs. A common way to leverage this support is via the testing package, but if you want to profile a full application it is sometimes complicated to configure the various profiling mechanisms. [...]","direction":"ltr"},"unread":true},{"id":"J04hLsLffn9GYO0/gp4Z4+C/kSDo3Uxwtg7Qn1jJCK4=_13fa693749b:503b:eacbe387","originId":"/blog/2013/06/23/ember-1-0-rc6.html","fingerprint":"e695138f","content":{"content":"
Ember.js 1.0 RC6 has been released and is available from the main website and at builds.emberjs.com. This release features two big changes: 1) a router update and 2) Ember Components.
Router Update\n
The biggest change is the router update (aka "router facelift"), which addresses two major issues. The first was inconsistent semantics between URL-based transitions and transitionTo. The second was spotty async support, which made it difficult to prevent or delay route entry in cases such as authentication and async code-loading.
We have now harmonized URL changes and transitionTo semantics and more fully embraced asynchrony using promises.
Additionally, router transitions have become first-class citizens and there are new hooks to prevent or decorate transitions:
* willTransition: fires on current routes whenever a transition is about to take place
* beforeModel/model/afterModel: hooks fired during the async validation phase

Finally, there is an error event which fires whenever there is a rejected promise or error thrown in any of the beforeModel/model/afterModel hooks.
For more on new router features, see:\n
Special thanks to Alex Matchneer for his work on this!\n
Ember Components\n
The other major change is the introduction of Ember Components, which shares Web Components' goal of facilitating creation of reusable higher-level page elements.
Ember Components will generally consist of a template and a view which encapsulate the template's property access and actions. Any reference to outside constructs is handled through context info passed into the view. Components can be customized through subclassing.

Ember Components naming conventions are: 1) the template's name begins with 'components/', and 2) the Component's name must include a '-' (this latter convention is consistent with Web Components standards, and prevents name collisions with built-in controls that wrap HTML elements). As an example, a component might be named 'radio-button'. Its template would be 'components/radio-button' and you would call it as {{radio-button}} in other templates.
Stay tuned for more docs and examples of this exciting new feature.","direction":"ltr"},"title":"Ember 1.0 RC6","updated":1371945600000,"published":1371945600000,"author":"Ember","crawled":1372889248923,"origin":{"streamId":"feed/http://emberjs.com/blog/feed.xml","htmlUrl":"http://emberjs.com/blog","title":"Ember Blog"},"summary":{"content":"
The biggest change is router update (aka \"router facelift...","direction":"ltr"},"unread":true},{"id":"BNU7PBBhwHi0trgjATqXMRVLGGikmyZQMhiBOKF3WjM=_13fa12b9416:28475e:f5ac5ed","originId":"http://dave.cheney.net/?p=650","fingerprint":"dc7ecd3d","keywords":["Go","Programming","arm","tarball"],"content":{"content":"
This evening I rebuilt my unofficial ARM tarball distributions to Go version 1.1.1.\n
You can find them by following the link in the main header of this page.","direction":"ltr"},"title":"Unofficial Go 1.1.1 tarballs for ARM now available","published":1372766216000,"alternate":[{"href":"http://dave.cheney.net/2013/07/02/unofficial-go-1-1-1-tarballs-for-arm-now-available","type":"text/html"}],"author":"Dave Cheney","crawled":1372798555158,"origin":{"streamId":"feed/http://dave.cheney.net/feed","htmlUrl":"http://dave.cheney.net","title":"Dave Cheney"},"summary":{"content":"This evening I rebuilt my unofficial ARM tarball distributions to Go version 1.1.1. You can find them by following the link in the main header of this page.","direction":"ltr"},"unread":true},{"recrawled":1372565864356,"id":"BNU7PBBhwHi0trgjATqXMRVLGGikmyZQMhiBOKF3WjM=_13f92dee4c0:494f6d:2ae58ca9","fingerprint":"b5d314e9","originId":"http://dave.cheney.net/?p=612","keywords":["Go","Programming","benchmark","testing"],"content":{"content":"
This post continues a series on the testing package I started a few weeks back. You can read the previous article on writing table driven tests here. You can find the code mentioned below in the https://github.com/davecheney/fib repository.

The Go testing package contains a benchmarking facility that can be used to examine the performance of your Go code. This post explains how to use the testing package to write a simple benchmark.
You should also review the introductory paragraphs of Profiling Go programs, specifically the section on configuring power management on your machine. For better or worse, modern CPUs rely heavily on active thermal management which can add noise to benchmark results.\n
We’ll reuse the Fib function from the previous article.
func Fib(n int) int {
	if n < 2 {
		return n
	}
	return Fib(n-1) + Fib(n-2)
}
Benchmarks are placed inside _test.go files and follow the rules of their Test counterparts. In this first example we’re going to benchmark the speed of computing the 10th number in the Fibonacci series.
// from fib_test.go
func BenchmarkFib10(b *testing.B) {
	// run the Fib function b.N times
	for n := 0; n < b.N; n++ {
		Fib(10)
	}
}
Writing a benchmark is very similar to writing a test as they share the infrastructure from the testing package. Some of the key differences are:
* Benchmark functions start with Benchmark, not Test.
* Benchmark functions are run several times by the testing package. The value of b.N will increase each time until the benchmark runner is satisfied with the stability of the benchmark. This has some important ramifications which we’ll investigate later in this article.
* Each benchmark must run the code under test b.N times. The for loop in BenchmarkFib10 will be present in every benchmark function.

Now that we have a benchmark function defined in the tests for the fib package, we can invoke it with go test -bench=.
% go test -bench=.
PASS
BenchmarkFib10   5000000       509 ns/op
ok      github.com/davecheney/fib   3.084s
Breaking down the text above, we pass the -bench flag to go test, supplying a regular expression matching everything. You must pass a valid regex to -bench; just passing -bench is a syntax error. You can use this property to run a subset of benchmarks.
The first line of the result, PASS, comes from the testing portion of the test driver; asking go test to run your benchmarks does not disable the tests in the package. If you want to skip the tests, you can do so by passing a regex to the -run flag that will not match anything. I usually use
go test -run=XXX -bench=.\n
The second line is the average run time of the function under test for the final value of b.N iterations. In this case, my laptop can execute Fib(10) in 509 nanoseconds. If there were additional Benchmark functions that matched the -bench filter, they would be listed here.
As the original Fib function is the classic recursive implementation, we’d expect it to exhibit exponential behavior as the input grows. We can explore this by rewriting our benchmark slightly using a pattern that is very common in the Go standard library.
func benchmarkFib(i int, b *testing.B) {
	for n := 0; n < b.N; n++ {
		Fib(i)
	}
}

func BenchmarkFib1(b *testing.B)  { benchmarkFib(1, b) }
func BenchmarkFib2(b *testing.B)  { benchmarkFib(2, b) }
func BenchmarkFib3(b *testing.B)  { benchmarkFib(3, b) }
func BenchmarkFib10(b *testing.B) { benchmarkFib(10, b) }
func BenchmarkFib20(b *testing.B) { benchmarkFib(20, b) }
func BenchmarkFib40(b *testing.B) { benchmarkFib(40, b) }
Making benchmarkFib private avoids the testing driver trying to invoke it directly, which would fail as its signature does not match func(*testing.B). Running this new set of benchmarks gives these results on my machine.
BenchmarkFib1   1000000000         2.84 ns/op
BenchmarkFib2    500000000         7.92 ns/op
BenchmarkFib3    100000000         13.0 ns/op
BenchmarkFib10     5000000          447 ns/op
BenchmarkFib20       50000        55668 ns/op
BenchmarkFib40           2    942888676 ns/op
Apart from confirming the exponential behavior of our simplistic Fib function, there are some other things to observe in this benchmark run:
* Each benchmark is run for a minimum of one second by default. If that second has not elapsed when the Benchmark function returns, the value of b.N is increased in the sequence 1, 2, 5, 10, 20, 50, … and the function run again.
* BenchmarkFib40 only ran two times, with the average just under a second for each run. As the testing package uses a simple average (total time to run the benchmark function over b.N) this result is statistically weak. You can increase the minimum benchmark time using the -benchtime flag to produce a more accurate result.

% go test -bench=Fib40 -benchtime=20s
PASS
BenchmarkFib40        50    944501481 ns/op
Above I mentioned the for loop is crucial to the operation of the benchmark driver. Here are two examples of a faulty Fib benchmark.
func BenchmarkFibWrong(b *testing.B) {
	for n := 0; n < b.N; n++ {
		Fib(n)
	}
}

func BenchmarkFibWrong2(b *testing.B) {
	Fib(b.N)
}
On my system BenchmarkFibWrong never completes. This is because the run time of the benchmark will increase as b.N grows, never converging on a stable value. BenchmarkFibWrong2 is similarly affected and never completes.
Before concluding I wanted to highlight that to be completely accurate, any benchmark should be careful to avoid compiler optimisations eliminating the function under test and artificially lowering the run time of the benchmark.\n
var result int

func BenchmarkFibComplete(b *testing.B) {
	var r int
	for n := 0; n < b.N; n++ {
		// always record the result of Fib to prevent
		// the compiler eliminating the function call.
		r = Fib(10)
	}
	// always store the result to a package level variable
	// so the compiler cannot eliminate the Benchmark itself.
	result = r
}
The benchmarking facility in Go works well, and is widely accepted as a reliable standard for measuring the performance of Go code. Writing benchmarks in this manner is an excellent way of communicating a performance improvement, or a regression, in a reproducible way.","direction":"ltr"},"alternate":[{"href":"http://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go","type":"text/html"}],"author":"Dave Cheney","crawled":1372558648512,"origin":{"streamId":"feed/http://dave.cheney.net/feed","htmlUrl":"http://dave.cheney.net","title":"Dave Cheney"},"published":1372557972000,"summary":{"content":"This post continues a series on the testing package I started a few weeks back. You can read the previous article on writing table driven tests here. You can find the code mentioned below in the https://github.com/davecheney/fib repository. Introduction The Go testing package contains a benchmarking facility that can be used to examine the performance [...]","direction":"ltr"},"title":"How to write benchmarks in Go","unread":true},{"id":"BNU7PBBhwHi0trgjATqXMRVLGGikmyZQMhiBOKF3WjM=_13f59dcd0ef:41f8b:f1c0ed2","fingerprint":"2fc77cae","originId":"http://dave.cheney.net/?p=608","keywords":["Go","Programming","concurrency","race","testing"],"content":{"content":"
This is a short post on stress testing your Go packages. Concurrency or memory correctness errors are more likely to show up at higher concurrency levels (higher values of GOMAXPROCS). I use this script when testing my packages, or when reviewing code that goes into the standard library.
#!/usr/bin/env bash -e

go test -c
# comment above and uncomment below to enable the race builder
# go test -c -race
PKG=$(basename $(pwd))

while true ; do
        export GOMAXPROCS=$[ 1 + $[ RANDOM % 128 ]]
        ./$PKG.test $@ 2>&1
done
I keep this script in $HOME/bin
so usage is\n
$ cd $SOMEPACKAGE
$ stress.bash
PASS
PASS
PASS
You can pass additional arguments to your test binary on the command line:
stress.bash -test.v -test.run=ThisTestOnly\n
The goal is to be able to run the stress test for as long as you want without a test failure. Once you achieve that, uncomment go test -c -race
and try again.","direction":"ltr"},"title":"Stress test your Go packages","published":1371599292000,"alternate":[{"href":"http://dave.cheney.net/2013/06/19/stress-test-your-go-packages","type":"text/html"}],"author":"Dave Cheney","crawled":1371602211055,"origin":{"streamId":"feed/http://dave.cheney.net/feed","htmlUrl":"http://dave.cheney.net","title":"Dave Cheney"},"summary":{"content":"This is a short post on stress testing your Go packages. Concurrency or memory correctness errors are more likely to show up at higher concurrency levels (higher values of GOMAXPROCS). I use this script when testing my packages, or when reviewing code that goes into the standard library. #!/usr/bin/env bash -e go test -c # [...]","direction":"ltr"},"unread":true},{"recrawled":1371602211055,"id":"BNU7PBBhwHi0trgjATqXMRVLGGikmyZQMhiBOKF3WjM=_13f52f80b5c:a18618:a7cd00","fingerprint":"ced7c14","originId":"http://dave.cheney.net/?p=526","keywords":["Go","Programming","centos","redhat","unsupported"],"content":{"content":"
Important: neither Go 1.0 nor Go 1.1 has ever supported RHEL5 or CentOS5. Please do not interpret anything in this article as a statement that Go does support RHEL5 or CentOS5.
Go has never supported RedHat 5 or CentOS 5. We’ve been pretty good at getting that message out, but it still catches a few people by surprise. The reason these old releases are not supported is that the Linux kernel that ships with them, a derivative of 2.6.18, does not provide three facilities required by the Go runtime. These are:

1. The O_CLOEXEC flag passed to open(2). We attempt to work around this in the os.OpenFile function, but not all kernels that lack this flag return an error saying so. The result on RHEL5/CentOS5 systems is that file descriptors can leak into child processes. This isn’t a big problem, but it does cause test failures.
2. accept4(2), which was introduced in kernel 2.6.28 to allow O_CLOEXEC to be set on newly accepted socket file descriptors. Where this syscall is not supported, we fall back to the older accept(2) syscall at a small performance hit.
3. clock_gettime(2) via the vDSO. The vDSO is a way of projecting a small part of the kernel into your process address space, so certain syscalls (known as vsyscalls) can be made without the cost of a trap into kernel space or a context switch. Go uses clock_gettime(2) via the vDSO in preference to the older gettimeofday(2) syscall as it is both faster and higher precision.

As RHEL5/CentOS5 are not supported, there are no binary packages available on the golang.org website. To install Go you will need to use the source tarball; in this case we’re using the Go 1.1.1 release. I’m using a CentOS 5.9 amd64 image running in a VM.
The packages required to build Go on RedHat platforms are listed on the Go community wiki.\n
$ sudo yum install gcc glibc-devel\n
We’re going to download the Go 1.1.1 source distribution and unpack it to $HOME/go
.\n
$ curl https://go.googlecode.com/files/go1.1.1.src.tar.gz | tar xz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8833k  100 8833k    0     0   710k      0  0:00:12  0:00:12 --:--:--  974k
$ ls
Desktop  go
$ cd go/src
$ ./make.bash
# Building C bootstrap tool.
cmd/dist

# Building compilers and Go bootstrap tool for host, linux/amd64.
lib9
libbio
...
Installed Go for linux/amd64 in /home/dfc/go
Installed commands in /home/dfc/go/bin
Add go to your PATH:
$ export PATH=$PATH:$HOME/go/bin
$ go version
go version go1.1.1 linux/amd64
As described above, RHEL5/CentOS5 are not supported because their kernel is too old. Here are some of the known issues; since these platforms are unsupported, the issues will not be fixed.
You’ll notice above that to build Go I ran make.bash, not the recommended all.bash, to skip the tests. Due to the lack of working O_CLOEXEC support, some tests will fail. This is a known issue and will not be fixed.
$ ./run.bash
...
--- FAIL: TestExtraFiles (0.05 seconds)
	exec_test.go:302: Something already leaked - closed fd 3
	exec_test.go:359: Run: exit status 1; stdout "leaked parent file. fd = 10; want 9\n", stderr ""
FAIL
FAIL	os/exec	0.489s
At some point during the RHEL5 release cycle, support for vDSO vsyscalls was added to RedHat’s 2.6.18 kernel; however, that point appears to differ by point release. For example, on RedHat 5, kernel 2.6.18-238.el5 does not work, whereas 2.6.18-238.19.1.el5 does. Running CentOS 5.9 with kernel 2.6.18-348.el5 also works.
$ ./make.bash
...
cmd/go
./make.bash: line 141:  8269 Segmentation fault      "$GOTOOLDIR"/go_bootstrap clean -i std
In summary, if your Go programs crash or segfault on RHEL5/CentOS5, you should try upgrading to the latest kernel available for your point release. I’ll leave the comments on this article open for a while so people can contribute their known working kernel versions; perhaps I can build a (partial) table of known good configurations.","direction":"ltr"},"title":"How to install Go 1.1 on CentOS 5.9","published":1371486466000,"alternate":[{"href":"http://dave.cheney.net/2013/06/18/how-to-install-go-1-1-on-centos-5","type":"text/html"}],"author":"Dave Cheney","crawled":1371486554972,"origin":{"streamId":"feed/http://dave.cheney.net/feed","htmlUrl":"http://dave.cheney.net","title":"Dave Cheney"},"summary":{"content":"Important Go 1.0 or 1.1 has never supported RHEL5 or CentOS5. Please do not interpret anything in this article as a statement that Go does support RHEL5 or CentOS5. Introduction Go has never supported RedHat 5 or CentOS 5. We’ve been pretty good at getting that message out, but it still catches a few people by surprise. [...]","direction":"ltr"},"unread":true}]}