Verizon Wireless is getting slapped with a fine and privacy requirements after inserting undeletable tracking cookies into users’ browsing sessions.
As part of a settlement with the Federal Communications Commission, Verizon will have to get users’ permission to share these “supercookies” with third-party partners. However, users will still have to opt out of tracking by Verizon itself. Verizon will be notifying subscribers about the changes, and has also agreed to a $1.35 million fine.
With tracking cookies, users are assigned a unique identifier that’s tied to their web activity, building up anonymized profiles that advertisers can target. But unlike conventional tracking cookies, which users can erase or avoid by opening a private browsing session, supercookies or “perma-cookies” cannot easily be deleted.
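Verizon's identifier traveled as an HTTP header (reported as X-UIDH) injected by the carrier's network rather than set by the browser, which is why clearing cookies or going incognito doesn't shake it. A minimal sketch of the idea, with an invented ID-derivation scheme (not Verizon's actual implementation):

```python
import hashlib

def inject_supercookie(headers: dict, subscriber_id: str) -> dict:
    """Carrier-side proxy adds a persistent tracking header to a request."""
    tagged = dict(headers)
    # The ID is derived from the subscriber account, not from the browser,
    # so clearing cookies or using private browsing does not change it.
    tagged["X-UIDH"] = hashlib.sha256(subscriber_id.encode()).hexdigest()[:24]
    return tagged

# Two requests from the same subscriber carry the same identifier even
# though the browser sent no cookies at all.
first = inject_supercookie({"Host": "example.com"}, "subscriber-555-0100")
second = inject_supercookie({"Host": "ads.example.net"}, "subscriber-555-0100")
assert first["X-UIDH"] == second["X-UIDH"]
```

Because the injection happens in the carrier's network, nothing the user does on the device removes the header; only the carrier (or an encrypted tunnel the carrier can't modify) can stop it.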
Consumer groups such as the Electronic Frontier Foundation started raising concerns about Verizon’s supercookies in late 2014, and lawmakers soon began calling on the FCC to investigate. In March 2015, Verizon added a way for wireless subscribers to opt out of the tracking through its privacy settings page.
Currently, Verizon is the only U.S. wireless provider that uses supercookies, though several other telcos around the world also track their users this way. AT&T had tested a similar program in the United States, but cancelled it in late 2014 as negative attention piled up.
The FCC is chalking up the settlement to its Open Internet order, also known as its net neutrality rules. Under these rules, wired and wireless service providers must be transparent about their network management and terms so users can make informed decisions. The FCC claims that Verizon violated those transparency requirements by failing to disclose its use of supercookies.
Why this matters: The settlement is a partial victory for Verizon subscribers, but it still requires them to take the final step of opting out to disable supercookies entirely. Given that Verizon now owns a major ad network through its acquisition of AOL last year, the company will likely do plenty of its own targeted advertising even for users who haven’t opted into third-party tracking.
Yesterday a new website and database launched called NoFlyZone.org, which invites people to enter their home address to prevent amateur drone pilots from zooming over their property, GoPro running. The database is maintained by a team of 10 people based in El Segundo, California, and led by Ben Marcus, a private pilot and drone operator.
Although it’s completely voluntary for drone companies to agree to honor the no-fly zone requests, at least seven drone hardware, operating system, and component makers have agreed to incorporate the NoFlyZone data into their products in some way. These include EHANG, Horizon Hobby, DroneDeploy, HEXO+, PixiePath, and RCFlyMaps. [Correction: YUNEEC was previously listed by NoFlyZone as a partner, but it insists it only prevents its drones from flying near airports.]
“It is up to each of those companies to implement and use the data in a way that works best for their technology and customers,” Marcus told Ars Technica in an e-mail. “Some of the companies in our coalition are drone manufacturers who can create a virtual barrier, or geo-fence, around each property. Others, like DroneDeploy and PixiePath, are operating system providers who will make our comprehensive database of no-fly zones available to its end-users.”
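A geo-fence of the sort Marcus describes can be sketched as a simple point-in-circle test against the database entries: each registered address becomes a circle on the map, and the flight controller refuses positions inside any of them. A hypothetical illustration (coordinates and radius invented for the example):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_no_fly(drone_lat, drone_lon, zones):
    """zones: list of (lat, lon, radius_m) tuples from the no-fly database."""
    return any(
        haversine_m(drone_lat, drone_lon, lat, lon) <= radius
        for lat, lon, radius in zones
    )

zones = [(33.919, -118.416, 50.0)]  # one registered property, 50 m radius
assert inside_no_fly(33.9191, -118.4161, zones)   # roughly 15 m away: blocked
assert not inside_no_fly(33.930, -118.416, zones) # over 1 km away: allowed
```

A real implementation would use property boundaries rather than circles and enforce the check in firmware, but the core lookup is this simple.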
Registering your home on NoFlyZone requires an exact address, although if Google Maps turns up the wrong address you can drag the resulting dropped pin to the property you actually intend to add to the list. You must provide a valid e-mail address to confirm registration, as well. NoFlyZone is also working to get addresses of schools, hospitals, military installations and so forth added onto the list to make it comprehensive for drone equipment makers.
Marcus told Ars that in the past 24 hours, more than 10,000 people have registered their properties to the database.
Still, NoFlyZone can’t yet prevent all drones from flying over your house, although Marcus says he hopes NoFlyZone will become more important to drone makers in the future. “The ideal vision is a blend of manufacturer, operator and consumer alliance–through this initiative, we’re opening lines of communication surrounding the safe and responsible use of drones,” Marcus wrote. “As a flight and drone enthusiast, I’m invested in the success of this industry, and know responsible drone usage is paramount in ensuring that future.”
Marcus added that the list of options NoFlyZone offers will likely expand in time, allowing customers to choose whether they’d like to permit package deliveries by drone, but don’t want camera-enabled drones over their property, for example.
[Update: Marcus got back to Ars about some of the issues with NoFlyZone and multiple residencies. His responses are inline in this paragraph.] There are still some details that may need to be ironed out. With respect to residency verification Marcus wrote: “We are learning a lot in the first 36 hours post-launch. As we go forward, we will continually seek to improve NoFlyZone and are evaluating if additional residency validation and authentication is necessary. We will include additional residency verification when we introduce the capability to customize airspace access settings, such as to receive packages by drone.” Marcus also said that people who are new residents to a property can remove their property via the NoFlyZone contact page, after they “upload proof of residency, such as a utility bill, or e-mail us from the same e-mail address they used when they registered the property.”
The concept of off-limits spaces for drones is just emerging as amateur drone piloting is gaining popularity. In January, a National Geospatial-Intelligence Agency employee drunkenly crashed a DJI Phantom 2 drone on the White House lawn, setting off a Secret Service investigation and eliciting some chiding from the president. The next day, DJI promised to update its drones’ firmware, adding “a No-Fly Zone centered on downtown Washington, DC” extending “for a 25 kilometer (15.5 mile) radius in all directions.”
“Phantom pilots in this area will not be able to take off from or fly into this airspace,” the company added.
A new wave of documents from Edward Snowden’s cache of National Security Agency data published by Der Spiegel demonstrates how the agency has used its network exploitation capabilities both to defend military networks from attack and to co-opt other organizations’ hacks for intelligence collection and other purposes. In one case, the NSA secretly tapped into South Korean network espionage on North Korean networks to gather intelligence.
The documents were published as part of an analysis by Jacob Appelbaum and others working for Der Spiegel of how the NSA has developed an offensive cyberwarfare capability over the past decade. According to a report by the New York Times, the access the NSA gained into North Korea’s networks—which initially leveraged South Korean “implants” on North Korean systems, but eventually consisted of the NSA’s own malware—played a role in attributing the attack on Sony Pictures to North Korean state-sponsored actors.
The documents released by Der Spiegel include internal NSA newsletter interviews and training materials detailing how the agency built up its Remote Operations Center to carry out “Tailored Access Operations” against a variety of targets, while also building the capability to do permanent damage to adversaries’ information systems. The dump also contains a malware sample for a keylogger, apparently developed by the NSA and possibly other members of the “Five Eyes” intelligence community. The code appears to be from the Five Eyes joint program “Warriorpride,” a set of tools shared by the NSA, the United Kingdom’s GCHQ, the Australian Signals Directorate, Canada’s Communications Security Establishment, and New Zealand’s Government Communications Security Bureau.
It’s not clear from the report whether the keylogger sample came from the cache of documents provided by former NSA contractor Edward Snowden or from another source. As of now, Appelbaum and Der Spiegel have not yet responded to a request by Ars for clarification. However, Appelbaum has previously published NSA material that apparently did not come from the Snowden cache, including the NSA’s ANT catalog of espionage tools.
Pwning the pwners
At the core of the NSA’s ability to detect, deceive, block, and even repurpose others’ cyber-attacks, according to the documents, are Turbine and Turmoil, components of the Turbulence family of Internet surveillance and exploitation systems. These systems are also connected to Tutelage, an NSA system used to monitor traffic to and from US military networks and to defend against attacks on Department of Defense (DoD) systems.
When an attack on a DoD network is detected through passive surveillance (either through live alerts from the Turmoil surveillance filters or processing by the Xkeyscore database), the NSA can identify the components involved in the attack and take action to block it, redirect it to a false target to analyze the malware used in the attack, or do other things to disrupt or deceive the attacker. This all happens outside of DoD’s networks, on the public Internet, using “Quantum” attacks injected into network traffic at a routing point.
But the NSA can also use others’ cyberattacks for its own purposes, including hijacking botnets operated by other actors to spread the NSA’s own “implant” malware. Collecting intelligence on a target by way of another actor’s hack of that target is referred to within the signals intelligence community as “fourth party collection.” By discovering an active exploit by another intelligence organization or other attacker on a target of interest, the NSA can opportunistically ramp up collection on that party as well, or even use the attacker’s access to distribute its own surveillance malware.
In a case study covered in one NSA presentation, the NSA’s Tailored Access Office hijacked a botnet known by the codename “Boxingrumble” that had primarily targeted the computers of Chinese and Vietnamese dissidents and was being used to target the DoD’s unclassified NIPRNET network. The NSA was able to deflect the attack and fool the botnet into treating one of TAO’s servers as a trusted command and control (C&C or C2) server. TAO then used that position of trust, gained by executing a DNS spoofing attack injected into the botnet’s traffic, to gather intelligence from the bots and distribute the NSA’s own implant malware to the targets.
The Tor Project has flagged a server in Russia after a security researcher found it slipped in malware when users were downloading files.
Tor is short for The Onion Router, which is software that offers users a greater degree of privacy when browsing the Internet by routing traffic through a network of worldwide servers. The system is widely used by people who want to conceal their real IP address and mask their web browsing.
The suspicious server was an “exit node” for Tor, which is the last server in the winding chain used to direct web browsing traffic to its destination.
Roger Dingledine, Tor Project’s project leader and director, wrote that the Russian server has been labeled a bad exit node, which should mean Tor clients will avoid using it.
The Russian server was found by Josh Pitts, who does penetration testing and security assessments with Leviathan Security Group. He wrote he wanted to find out how common it was to find attackers modifying the binaries of legitimate code in order to deliver malware.
Binaries from large software companies have digital signatures that can be verified to make sure the code hasn’t been modified. But Pitts wrote that most code isn’t signed, and, further, most downloads don’t employ TLS (Transport Layer Security). TLS is the successor to SSL (Secure Sockets Layer) and encrypts connections between a client and a server.
He suspected attackers were “patching” binaries during man-in-the-middle attacks and took a look at more than 1,110 Tor exit nodes.
Pitts only found one Tor exit node that was patching binaries. The node would modify only uncompressed portable executables, he wrote.
“This does not mean that other nodes on the Tor network are not patching binaries; I may not have caught them, or they may be waiting to patch only a small set of binaries,” he wrote.
The broad lesson for users is that they should be wary of downloading code that is not protected by SSL/TLS, even if the binary itself is digitally signed, Pitts wrote.
“All people, but especially those in countries hostile to ‘Internet freedom,’ as well as those using Tor anywhere, should be wary of downloading binaries hosted in the clear—and all users should have a way of checking hashes and signatures out of band prior to executing the binary,” he wrote.
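The out-of-band check Pitts describes can be as simple as comparing a hash of the downloaded bytes against one published through a separate trusted channel (an HTTPS page, signed release notes). A minimal sketch, using stand-in "installer" bytes:

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of the downloaded bytes, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """True only if the downloaded bytes match the published hash."""
    # Constant-time comparison via hmac.compare_digest.
    return hmac.compare_digest(sha256_hex(data), expected_sha256)

original = b"legitimate installer bytes"
published = sha256_hex(original)  # hash obtained out of band

assert verify_download(original, published)
# A binary patched in transit no longer matches the published hash.
assert not verify_download(original + b"\x90 patched payload", published)
```

The check only helps if the published hash itself arrives over a channel the attacker can't rewrite; a hash served over the same unencrypted connection as the binary can be patched right along with it.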
Vulnerabilities in the Tails operating system could reveal your IP address, but you can avoid trouble by taking a couple of precautions.
Tails, a portable operating system that employs a host of privacy-focused components, plans to patch flaws contained in I2P, a networking tool developed by the Invisible Internet Project that provides greater anonymity when browsing. It’s similar in concept to Tor.
On Saturday, I2P developers released several fixes for XSS (cross-site scripting) and remote execution flaws found by Exodus Intelligence, a vulnerability broker that irked some by announcing first on Twitter it knew of flaws but didn’t immediately inform Tails.
It wasn’t clear when Tails would release an update with I2P’s fixes. The Tails project couldn’t immediately be reached Sunday.
On Friday, Tails advised that users can take steps to protect themselves in the meantime. It recommended that I2P not be intentionally launched in Tails version 1.1 and earlier.
Luckily, I2P is not launched by default when Tails is started. But Tails warned that an attacker could use some other undisclosed security hole to launch I2P and then try to de-anonymize a user. To make sure that doesn’t happen, the I2P software package should be removed when Tails is launched.
The danger of hackers using the I2P vulnerabilities is mitigated somewhat by the fact the details of the flaws haven’t been disclosed publicly. But Tails wrote that hackers may have figured them out.
Even general descriptions of vulnerabilities often give hackers enough information of where to start hunting for flaws, enabling them to figure out the exact problems.
To execute an attack on I2P, a hacker must also lure someone to a website where they’ve manipulated the content, Tails said. That sort of lure is usually set using social engineering, successfully tricking a person into loading malicious content. Savvy users may spot such a lure, but it’s easy to get tricked.
Soon after it wrote on Twitter of the flaws, Exodus Intelligence said it would provide the details to Tails and not sell the information to its customers. It wasn’t clear if public pressure influenced Exodus.
The company wouldn’t say if it would make similar exceptions for privacy-focused software in the future such as Tails, which has been recommended by former National Security Agency contractor Edward Snowden.
During a recent hacker conference, forensic scientist and iPhone jailbreaking expert Jonathan Zdziarski outlined a number of undocumented, high-value forensic services running on every iOS device. He also found suspicious design omissions in iOS that make data collection easier, according to a report from ZDNet.
Zdziarski notes that while Apple has worked hard to make iOS devices reasonably secure against typical attackers, the company has also put a lot of time and planning into making those devices accessible to law enforcement.
The hacker also found that screen-locking an iPhone doesn’t encrypt its data; the only real way to do that is to power the handset off. What’s more, some of the undocumented services are able to bypass backups and can be accessed over USB, Wi-Fi, or perhaps even a cellular connection.
Using commercially available forensics tools, for example, law enforcement could gain access to a device during a routine traffic stop or during an arrest before a suspect is able to power the phone off.
Zdziarski finds it suspicious that none of these services (“lockdownd,” “pcapd” or “mobile.file_relay”) is referenced in any Apple software. The data they collect is personal in nature, and thus unlikely to be used for debugging purposes, and it is stored in raw format, making it useless to wireless carriers or during a trip to a Genius Bar.
When all is said and done, Zdziarski is left with more questions than answers.
Three stealthy tracking mechanisms designed to avoid weaknesses in browser cookies pose potential privacy risks to Internet users, a new research paper has concluded.
The methods—known as canvas fingerprinting, evercookies and cookie syncing—are in use across a range of popular websites. The findings, first reported by ProPublica, show how such tracking is important for targeted advertising but that the privacy risks may be unknown to all but the most sophisticated web users.
Profiling Web users, such as knowing what Web pages a person has visited before, is a central component of targeted advertising, which matches advertisements with topics a person may be interested in. It is key to charging higher rates for advertisements.
Cookies, or data files stored by a browser, have long been used for tracking, but cookies can be easily blocked or deleted, which diminishes their usefulness.
The methods studied by the researchers are designed to enable more persistent tracking but raise questions over whether people are aware of how much data is being collected.
The researchers, from KU Leuven in Belgium and Princeton University, wrote in their paper that they hope the findings will lead to better defenses and increased accountability “for companies deploying exotic tracking techniques.”
“The tracking mechanisms we study are advanced in that they are hard to control, hard to detect and resilient to blocking or removing,” they wrote.
Although the tracking methods have been known about for some time, the researchers showed how the methods are increasingly being used on top-tier, highly trafficked websites.
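Of the three mechanisms, cookie syncing is the easiest to sketch: two trackers that each know a browser by a different cookie ID exchange those IDs (typically via a redirect or tracking pixel carrying one tracker's ID in the URL), letting them join their profiles server-side. The tracker names and IDs below are invented for illustration:

```python
class Tracker:
    def __init__(self, name):
        self.name = name
        self.partner_ids = {}  # own cookie ID -> {partner name: partner's ID}

    def receive_sync(self, own_id, partner_name, partner_id):
        """Record a partner's ID for the browser we know as own_id."""
        self.partner_ids.setdefault(own_id, {})[partner_name] = partner_id

ad_exchange = Tracker("exchange.example")
data_broker = Tracker("broker.example")

# The user loads a page embedding both trackers. A sync request passes the
# exchange's cookie ID ("ex-123") to the broker, which files it against its
# own cookie for the same browser ("br-999"), and vice versa.
data_broker.receive_sync("br-999", "exchange.example", "ex-123")
ad_exchange.receive_sync("ex-123", "broker.example", "br-999")

# Either party can now join browsing histories recorded under the two IDs.
assert data_broker.partner_ids["br-999"]["exchange.example"] == "ex-123"
```

This is why deleting cookies from one tracker offers limited protection: once the mapping exists server-side, a fresh ID can be re-linked to the old profile through the same sync machinery.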
Based on some recent experience, I’m of the opinion that smartphones are about as private as a gas station bathroom. They’re full of leaks, prone to surveillance, and what security they do have comes from using really awkward keys. While there are tools available to help improve the security and privacy of smartphones, they’re generally intended for enterprise customers. No one has had a real one-stop solution: a smartphone pre-configured for privacy that anyone can use without being a cypherpunk.
That is, until now. The Blackphone is the first consumer-grade smartphone to be built explicitly for privacy. It pulls together a collection of services and software that are intended to make covering your digital assets simple—or at least more straightforward. The product of SGP Technologies, a joint venture between the cryptographic service Silent Circle and the specialty mobile hardware manufacturer Geeksphone, the Blackphone starts shipping to customers who preordered it sometime this week. It will become available for immediate purchase online shortly afterward.
Specs at a glance: Blackphone
SCREEN 4.7″ IPS HD
OS PrivatOS (Android 4.4 KitKat fork)
CPU 2GHz quad-core Nvidia Tegra 4i
RAM 1GB LPDDR3 RAM
GPU Tegra 4i GPU
STORAGE 16GB with MicroSD slot
NETWORKING 802.11b/g/n, Bluetooth 4.0 LE, GPS
PORTS Micro USB 3.0, headphones
CAMERA 8MP rear camera with AF, 5MP front camera
SIZE 137.6mm x 69.1mm x 8.38mm
BATTERY 2000 mAh
STARTING PRICE $629 unlocked
OTHER PERKS Bundled secure voice/video/text/file sharing, VPN service, and other security tools.
Dan Goodin and I got an exclusive opportunity to test Blackphone for Ars Technica in advance of its commercial availability. I visited SGP Technologies’ brand new offices in National Harbor, Maryland, to pick up mine from CEO Toby Weir-Jones; Dan got his personally delivered by CTO Jon Callas in San Francisco. We had two goals in our testing. The first was to test just how secure the Blackphone is using the tools I’d put to work recently in exploring mobile device security vulnerabilities. The second was to see if Blackphone, with all its privacy armor, was ready for the masses and capable of holding its own against other consumer handsets.
We found that Blackphone lives up to its privacy hype. During our testing in a number of scenarios, there was little if any data leakage that would give any third-party observer anything usable in terms of private information.
As far as its functionality as a consumer device goes, Blackphone still has a few rough edges. We were working with “release candidate” versions of the phone’s operating system and applications, so it would be unfair to judge their stability too harshly. But since the Google ecosystem of applications (Chrome, Google Play, and other Google-branded features) was carved out of PrivatOS, a privacy-focused fork of KitKat, it may feel like a step backward for some Android users—and a breath of fresh air for others.
Google has begun removing search results in compliance with a European court ruling that search engine providers must respond to requests to delete links to outdated information about a person.
As of Thursday, when a user searches for a name via one of Google’s European domains they may see a warning displayed at the bottom of the results page saying, “Some results may have been removed under data protection law in Europe.”
The notice is shown for most name searches and not just on pages that have been affected by a removal, Google said in a FAQ.
Google said it’s working as quickly as possible to get through the queue of requests, which as of about a month ago numbered 41,000. A Google spokesman would not provide the total number of removal requests received to date.
The warning does not appear on the Google.com domain. The so-called European right to be forgotten will not apply there because the .com domain is not targeted at the EU in general.
Google is responding to a May ruling by the Court of Justice of the European Union (CJEU). The court found that search engines like Google could be compelled upon request to remove results for queries that include a person’s name, if the results shown are inadequate, no longer relevant, or excessive.
Since this ruling was published Google has been working around the clock to comply, it said. “This is a complicated process because we need to assess each individual request and balance the rights of the individual to control his or her personal data with the public’s right to know and distribute information,” it said.
Individuals make removal requests by filling out an online form which will be reviewed by Google to determine whether the results include outdated information about a person’s private life.
Google will also look at whether there’s a public interest in the information remaining in its search results—for example, if it relates to financial scams, professional malpractice, criminal convictions or someone’s public conduct as a government official, Google said.
“These are difficult judgements and as a private organization, we may not be in a good position to decide on your case,” Google said, adding that assessing a case might take a while “because we have already received many such requests.” It said that individuals who disagree with its decisions can contact their local data protection authority.