Current social networks may have been present in the earliest modern humans By Kate Shaw | Published 4 days ago

A member of the Hadza identifies her social network for researchers.

If you ever sit back and wonder what it might have been like to live in the late Pleistocene, you're not alone. That's right about when humans emerged from a severe population bottleneck and began to expand globally. But, apparently, life back then might not have been too different from how we live today (that is, without the cars, the written language, and of course, the smartphone). In this week's Nature, a group of researchers suggests that we share many social characteristics with humans who lived in the late Pleistocene, and that these ancient humans may have paved the way for us to cooperate with each other.

Modern human social networks share several features, whether they operate within a group of schoolchildren in San Francisco or a community of millworkers in Bulgaria. The number of social ties a person has, the probability that two of a person’s friends are also friends, and the inclination for similar people to be connected are all very regular across groups of people living very different lives in far-flung places.
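These three properties have standard definitions in network science. As a rough illustration (not taken from the study, and using made-up data), here is how they could be computed for a small friendship graph with Python's networkx library:

```python
# Rough illustration only: a toy friendship graph, not the Hadza dataset.
import networkx as nx

# Hypothetical people, each with an "age" attribute
G = nx.Graph()
G.add_nodes_from([(1, {"age": 25}), (2, {"age": 27}), (3, {"age": 26}),
                  (4, {"age": 40}), (5, {"age": 42})])
G.add_edges_from([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)])

# 1. Number of social ties each person has (degree)
ties = dict(G.degree())

# 2. Probability that two of a person's friends are also friends
#    (average clustering coefficient)
clustering = nx.average_clustering(G)

# 3. Tendency for similar people to be connected (homophily),
#    measured here as assortativity by age
homophily = nx.numeric_assortativity_coefficient(G, "age")

print(ties, clustering, homophily)
```

The interesting finding is not the metrics themselves but that their values look broadly similar whether the graph comes from schoolchildren in San Francisco or hunter-gatherers in Tanzania.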

So, the researchers asked, are these traits universal to all groups of humans, or are they merely byproducts of our modern world? They also wanted to understand the social network traits that allowed cooperation to develop in ancient communities.

Of course, the researchers couldn't poll a group of ancient humans, so they had to find a community living today whose lifestyle closely resembles that of people who might have lived 130,000 years ago. They chose the Hadza, a group of hunter-gatherers who live in Tanzania and are largely insulated from industrialization and other modern influences. The Hadza community functions much like ancient hunter-gatherer groups did, by cooperating and sharing resources like food and child care. Hadza society is organized into camps, which are formed and abandoned regularly; the makeup of each camp also changes often, with individuals leaving one camp to join another.

The researchers visited 17 Hadza camps and surveyed 205 adults. First, they looked at individuals’ donations of honey sticks to other community members. They also asked questions like, “With whom would you like to live after this camp ends?” From the answers, the researchers constructed a model of the Hadza social network.

Many features of the hunter-gatherer network are very similar to those of modern, industrialized communities. Those who live farther away from each other are less likely to name each other as friends. Individuals who name more friends are also named more frequently by others, even by people they themselves did not name as friends. People who resemble each other in some physical way tend to be connected as well; among the Hadza, similarity in age, body fat, and handgrip strength increases the likelihood of friendship.

There are also several features of the Hadza social network that may facilitate extensive cooperation. People who cooperate (in this case, by donating more honey sticks) are connected to other cooperators, while non-cooperators tend to be connected to each other. This type of clustering allows cooperators to benefit from one another's large donations and to increase in the population.

Evolutionary biologists have predicted that, for cooperation to evolve and spread, there should be more variance in cooperative behavior between groups than within groups. This is another example of clustering, and it allows for differences in the productivity and fitness of groups with different cooperation levels. And indeed, in Hadza society, there is more variance in cooperation between different camps than within camps.

From these results, two things are clear. First, many of the universal characteristics of modern social networks also hold true for the Hadza, suggesting that these traits may have governed the social networks of ancient humans as well. Second, several social features that have been predicted to facilitate the evolution and spread of cooperation are present in Hadza communities.

Ancient societies likely differed from the Hadza in many ways, but this community of hunter-gatherers may be as close as we can now get to the structure and characteristics of extinct human communities. Cooperation is one of the most heavily researched, yet poorly understood, aspects of human life, and this research gives us insight into the type of community in which this phenomenon could have evolved and spread.

New Google privacy policy won't affect Apps for business, government By Jon Brodkin | Published 4 days ago

The new Google privacy policy that lets the company collect, store, and share user-specific information across Google services apparently will not apply to businesses and governments that have signed contracts to use the Google Apps productivity suite.

"Enterprise customers using Google Apps for Government, Business, or Education have individual contracts that define how we handle and store their data," according to a statement reported yesterday by The Next Web and attributed to Google Enterprise Vice President Amit Singh. "As always, Google will maintain our enterprise customers’ data in compliance with the confidentiality and security obligations provided to their domain. The new Privacy Policy does not change our contractual agreements, which have always superseded Google’s Privacy Policy for enterprise customers."

Ultimately, consumers angry about the new privacy policy have little sway since they receive Google's services for free. Large organizations paying Google for services, however, can guarantee themselves more data privacy in contractual agreements.

After concerns were raised about how Google might use information collected from government customers, the US General Services Administration (also a Google customer) issued a statement saying, "Our usage of the Google Apps solution is governed by contractual agreement with Google and our prime contractor, Unisys. The solution is compliant with all federal regulations and requirements, including those regarding privacy and data protection."

Google's government track record isn't perfect, however. Despite a signed contract, the Los Angeles Police Department has refused to switch from an in-house e-mail and productivity system to Google Apps due to security concerns. But at least on privacy and tracking, Google has apparently assured the feds that everything remains fine.

Symantec suspected source code breach back in 2006 By Jon Brodkin | Published 4 days ago

Symantec suspected in 2006 that its network had been breached, but the company was unable to confirm any data exfiltration until Anonymous started talking publicly about Symantec source code earlier this month.

We noted yesterday that Symantec confirmed the theft of source code from the 2006 versions of several Norton security products and the pcAnywhere remote access tool, and that Symantec is advising customers to disable pcAnywhere until a permanent fix is issued. We followed up with Symantec last night to learn some more details.

Symantec spokesperson Cris Paden tells Ars that Symantec "investigated the incident in 2006 but our results were inconclusive."

The investigation was apparently shelved until this month, when hackers related to Anonymous claimed to have possession of Symantec source code and threatened to release it, supposedly to accompany a lawsuit claiming that Symantec tricked users into buying products with trial software versions that wrongly report security problems.

Symantec tells us that "it was not until the source code showed up again via the claims and disclosure by Anonymous that we put two and two together and realized code was indeed stolen. All of the code Anonymous has was for 2006 versions of products. As such, we focused our investigation on the time period and went back through logs and data to confirm the two incidents were related."

"The code was indeed stolen from our network"

There have been some reports that code was stolen from servers maintained by India's military and intelligence departments, and that Symantec had provided the source code to India so the country's government could ensure that the software contained no malicious programs.

Symantec told Ars this isn't true, however; the theft occurred solely from Symantec's own network and servers.

"The code was indeed stolen from our network," Symantec told us. "Media reports that the code was stolen from the Indian government are based solely on the claims by Anonymous. Throughout our investigation, we have found no evidence that we ever turned over or shared any code with the Indian government. Furthermore, the documentation offered by Anonymous to back up their claims have since been shown to be faked. We're not sure how they got the code, but we've found no evidence the Indian government actually had it."

UPDATE: Symantec has sent us a further update to make clear that the original theft was not perpetrated by Anonymous, and it's not clear how Anonymous came into possession of the code. "Anonymous did NOT steal the code in 2006," Symantec tells us. "We're not sure who stole the code in 2006 and are re-investigating that incident. Furthermore, we're not sure how Anonymous came into possession of the code. They claim they stole it from the Indian government. The problem is, A) we never shared any code with the Indian government, and B), the memo Anonymous used to make the link subsequently was proven to be faked."

Symantec also tells us Anonymous did release some of the stolen code publicly, but only for the Norton Utilities product. Still, the release confirmed Anonymous did have real source code as it claimed.

Symantec has told users of pcAnywhere to disable the product for now unless they simply must use it for business purposes, in which case they should take recommended precautions to protect their systems. pcAnywhere, a remote access tool for diagnostics and helpdesk purposes, allows for PC-to-PC communication. It accounts for $20 million out of Symantec's $6.2 billion in annual revenue, the company said. The current version, 12.5, was released in November 2008.

No confirmed attacks so far

Symantec said its investigation has not uncovered any attacks resulting from the source code theft. However, a white paper the company released warned that the source code leak could lead to man-in-the-middle attacks, the launching of unauthorized remote control sessions, the interception of pcAnywhere traffic by anyone running a network sniffer on a business's network, or the theft of cryptographic keys that use Active Directory credentials.

So, how exactly did Symantec's network get breached? The company is keeping those details private. "We're not disclosing details of the attack in 2006 so as not to tip our hand to other attackers," Symantec tells us. "We don't have any further information to disclose other than the code was indeed stolen from our network."

However, Symantec has taken numerous steps to prevent such a breach from occurring again. "The processes we put in place were not in response to the 2006 incident but as part of our overall efforts to continuously strengthen the security of our networks," the company said.

The specific improvements include enhanced network monitoring, improved endpoint security, additional data loss protection strategy and controls, compartmentalized access to information, and improved network and server defenses protecting the source code repository. Further, Symantec removed many non-essential legacy domains, created new processes for development and security controls, and improved employee security training.

As we noted yesterday, Symantec says the Norton security products are not at any increased risk because the stolen code is largely not in use anymore, and in cases where it is in use, the out-of-the-box security settings would protect against any attacks related to the source code theft. pcAnywhere is a different story, with increased risk to users of versions 12.0, 12.1, 12.5, and previous, unsupported versions. "Customers of earlier versions of pcAnywhere are entitled to upgrade to version 12.5 at no cost," Symantec tells us.

Symantec has already released some fixes for pcAnywhere in response to Anonymous's actions, and plans further patches this week. On its breach disclosure page, Symantec says it "will continue to issue patches as needed until a new version of pcAnywhere that addresses all currently known vulnerabilities is released."

Missing ocean heat may never have been missing at all By Scott K. Johnson | Published 4 days ago

An Argo float

It was a massive heist that received little attention. Several hundred trillion joules of energy were disappearing every second. Investigators suspected the deep ocean was involved, but couldn't find any leads. There's no need to panic, though—a fresh look at the evidence shows that the energy may never have been missing in the first place.

Most people know that greenhouse gases trap heat near the Earth, warming the planet. We fixate on records set by temperatures of the near-surface atmosphere to track the warming caused by anthropogenic greenhouse gas emissions. But the atmosphere is only part of the picture. There are other reservoirs that take up heat energy as well—most notably, the ocean. In fact, about 90 percent of the energy added by the increase in greenhouse gases has gone into the ocean.

The 2000s saw lots of La Niñas, the cold phase of the El Niño/Southern Oscillation that lowers surface temperatures. If those temperatures are your only measure of global heat content, enough La Niñas may get you thinking that there's been a slowdown in the warming trend our planet has been experiencing. You get a much different picture when you look at the Earth as a whole, though. During La Niña years, the Earth actually gains more energy than it would otherwise. Conversely, El Niño years make surface temperatures warmer but slow the rise in total energy.

That’s mainly the result of changes in cloudiness, precipitation, and storm tracks that come along with La Niña or El Niño conditions. For example, clearer skies in the tropical Pacific (La Niña) can allow more solar radiation through, whereas increased evaporation (El Niño) moves heat from the ocean to the atmosphere while boosting cloudiness.

We now have satellite networks that measure the incoming solar radiation and the outgoing infrared radiation, so we can track the changes in the planet's heat content pretty well. If the incoming solar radiation is greater than the outgoing infrared, energy is being added to the system. If that energy goes into the atmosphere, we can track it using a vast network of weather stations (and satellites as well) that enable calculations of global near-surface atmospheric temperature.

The ocean is a tougher nut to crack. We used to rely on ship-based temperature profiles for the surface ocean, but the Argo program changed that in 2003. This array of 3,000 instrumented floats measures temperature (among other things) in the upper 2 kilometers of the ocean. That's a lot more detail, but creates a significant shift in the sorts of data we have.

In 2010, Kevin Trenberth and John Fasullo (of the National Center for Atmospheric Research) published an article in Science describing a discrepancy in our accounting of Earth’s energy budget. While the satellite tracking of incoming solar radiation and outgoing infrared radiation between 2004 and 2008 clearly showed that the net addition of energy was increasing, measurements of ocean heat content showed a decline. It was sort of like seeing that your checking account balance had gone down by $1000, but finding that you had only written checks totaling $300. You know the money is gone, but where did it go?

The missing energy didn’t show up in any of the other energy budget terms we can track. That may not be as exciting as a seemingly faster-than-light neutrino, but it was a very important disparity. Trenberth and Fasullo suggested that the energy could be moving into the deep ocean, which we currently can’t monitor. Lots of modeling had previously shown that, in a warming climate, the upper ocean heat content will occasionally decline as energy moves into the deep ocean. In a 2011 paper, they also mentioned an alternate possibility: the uncertainty in our ocean heat content measurements is simply very large, which would make the discrepancy an experimental error (and therefore much less interesting).

A large source of potential error was that the switch from the ship-based measurements of ocean heat content to the Argo array entailed all kinds of difficult-to-quantify uncertainties. (New instruments that were operated differently, uneven distribution and changing density of measurement points as floats were gradually deployed, etc.) A new paper in Nature Geoscience makes headway by re-examining the ocean heat content data and accounting for that complex uncertainty.

The group’s ocean heat content record differs slightly from other analyses (just as global surface temperature series from NASA and NOAA don’t come out exactly the same), but the pivotal bit is that the uncertainty during the Argo transition period really was quite large. In fact, the difference between the ocean heat content and net total energy data is not statistically significant—it’s well within the uncertainty. That suggests that the missing energy might not be so missing.

At least one thing remains clear in all the datasets—the Earth is steadily gaining energy. Between 2001 and 2010, the amount of energy reaching the Earth exceeded the amount leaving by an average of about 0.5 watts per square meter.
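To see how a fraction of a watt per square meter becomes the "hundreds of trillions of joules per second" described at the start, multiply the average imbalance by Earth's surface area of roughly $5.1 \times 10^{14}$ square meters (a back-of-the-envelope check, not a figure from the paper):

$$0.5\ \mathrm{W\,m^{-2}} \times 5.1 \times 10^{14}\ \mathrm{m^{2}} \approx 2.6 \times 10^{14}\ \mathrm{W},$$

that is, on the order of 260 trillion joules of extra energy retained by the planet every second.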

Still, it’s critically important that our energy accounting improve, and that’s a formidable task. It would be encouraging to declare the case of the missing energy “solved,” but it’s not so encouraging that our measurements are too uncertain to settle the matter. As the authors conclude, “the large inconsistencies between independent observations of Earth’s energy flows points to the need for improved understanding of the error sources and of the strengths and weaknesses of the different analysis methods, as well as further development and maintenance of measurement systems to track more accurately Earth’s energy imbalance on annual timescales."

Europe proposes a "right to be forgotten" By Peter Bright | Published 4 days ago

European Union Justice Commissioner Viviane Reding has proposed a sweeping reform of the EU's data protection rules, claiming that the proposed rules will both cost less for governments and corporations to administer and simultaneously strengthen online privacy rights.

The 1995 Data Protection Directive already gives EU citizens certain rights over their data. Organizations can process data only with consent, and only to the extent that they need to fulfil some legitimate purpose. They are also obliged to keep data up-to-date, and retain personally identifiable data for no longer than is necessary to perform the task that necessitated collection of the data in the first place. They must ensure that data is kept secure, and whenever processing of personal data is about to occur, they must notify the relevant national data protection agency.

The new proposals go further than the 1995 directive, especially in regard to the control they give citizens over their personal information. Chief among the new proposals is a "right to be forgotten" that will allow people to demand that organizations that hold their data delete that data, as long as there are no legitimate grounds to retain it.

It's not 1995 anymore

The 1995 Directive was written in a largely pre-Internet era; back then, fewer than one percent of Europeans were Internet users. The proposed directive includes new requirements designed for the Internet age: EU citizens must be able to both access their data and transfer it between service providers, something that the commission argues will increase competition. Citizens will also have to give their explicit permission before companies can process their data; assumptions of permission won't be permitted, and systems will have to be private by default.

These changes are motivated in particular by the enormous quantities of personal information that social networking sites collect, and the practical difficulties that users of these services have in effectively removing that information. Reding says that the new rules "will help build trust in online services because people will be better informed about their rights and in more control of their information."

Where do the claimed savings come from? EU member states currently comply with the 1995 Directive, but each of the 27 states has interpreted and applied these rules differently. The European Commission argues that this incurs unnecessary administrative burdens on all those involved with handling data. The new mandate would create a single set of rules consistent across the entire EU, with projected savings for businesses of around €2.3 billion (US$2.98 billion) per year.

With rules streamlined throughout the trading bloc, companies would in turn only have to deal with the data protection authorities in their home country, rather than in every state in which they trade.

The new rules would also reduce the routine data protection notifications that businesses must currently send to national data protection authorities, allowing further savings of €130 million (US$169 million). However, organizations that handle data will have greater obligations in the event of data breaches: they will have to notify data protection authorities as soon as possible, preferably within 24 hours.

The rules will also apply to companies that process data abroad, if those companies serve the EU market and EU citizens.

Non-compliance will be punishable by the national data protection authorities, and they will be able to apply penalties of up to €1 million (US$1.3 million) or two percent of global annual turnover.

The proposal will undergo discussion in the European Parliament. Once the rules are adopted, they will take effect within two years.

A mixed response

Industry responses to the proposals have been varied. While the harmonization and the reduction of routine notifications have been welcomed, some have rubbished Reding's claim that the new directive will reduce costs. For example, the Business Software Alliance's European government affairs director, Thomas Boué, said, "The Commission's proposal today errs too far in the direction of imposing prescriptive mandates for how enterprises must collect, store, and manage information."

Supporters of the new proposals argue that the new directive will force companies to do things that they should already be doing. Christian Toon, head of information security at document management firm Iron Mountain, says, "Many businesses of all sizes are falling short of what is required to manage information responsibly. [...] Regardless of turnover, sector or country of operation, making sure that employee and customer information is protected should be common practice, not a reaction to new legislation."

Indeed, many of the provisions of the new directive have similar counterparts in the existing directive, and others are features of national law of some, but not all, EU member states. For example, current law gives citizens the right to have inaccurate data about them corrected. In some countries, such as the UK, this extends to a right to have that inaccurate data deleted outright. In others, such as Belgium, Germany, and Sweden, it does not. The new rules would make that right to delete universal, and would make it apply even for accurate data that is no longer necessary.

This is the so-called "right to be forgotten". The proposal does not create a right to be thrown down the memory hole or rewrite the past; news reports and similar material would be a legitimate reason to retain personal information, and this would override a demand to have data deleted. But sites like Facebook—which has had difficulties with the concept of deletion—and Google would likely be required to purge any such personal data should someone demand that they do so.

A strict "opt-in" requirement for the use of personal data could make advertising-funded services that rely on that personal data to properly target advertisements difficult to operate. The requirement to report breaches in 24 hours might also be difficult to fulfil, since it can take much longer for a breach to even be detected.

The new rules would create an interesting predicament for a company like Google. The search giant has just announced its new privacy policy that enables it to collect and aggregate data from almost all Google services, with no provision to opt out or restrict the processing the company performs to private data. This is the opposite of the "private by default" policy that the proposed rules require, and the only way that Google users will attain that privacy is by not creating or using a Google account.

When asked about the impact of the new rules, a Google spokesperson told Ars: "We support simplifying privacy rules in Europe to both protect consumers online and stimulate economic growth. It is possible to have simple rules that do both. We look forward to debating the proposals over the coming months."

But still, this is not a fundamental shift in the demands placed on data-holding organizations. They must already be able to identify personal data, they must already store it securely, and they must already be able to provide it on-demand. Doing these things requires that systems are designed appropriately, and this can certainly incur costs—but they are costs that should already exist today.

Google's new privacy policy could anger FTC By Casey Johnston | Published 4 days ago

Google announced on Monday that it would be enacting a new privacy policy that, when customers agree to it, will allow the company to collect and store information across all of its services. Google will also share information gathered across those services, both to "maintain, protect and improve" the services and to target search results and ads for each user. There is no way to opt out of the information-sharing aside from deleting your entire account and saying goodbye to your Gmail, YouTube videos, and Calendar, among other things. Users may feel that this is a backhanded gesture on Google's part, but the new privacy policy may also raise issues with the company's agreement with the FTC.

Google has been able to see and use its users' information for a long time, as in targeted ads displayed alongside Gmail. With the new privacy policy, Google will store information from all of the services a person might use, including location and application information from smartphones, Google Wallet, Google+, your search and viewing history in YouTube and Maps, books you browse, RSS feeds you read, and your Blogger posts marked "private." The company can then share that information across all of those services.

All of this information has been passing in front of Google's eyes in a glittering stream for years, but now they're putting a bucket underneath it. While information shared across services may make for a more integrated experience, it also creates a more complete picture of users that can be tacked onto the advertising dartboard.

Google isn't the first to dig its fingers into service information and receive blowback. Facebook has been tiptoeing over that line for years, occasionally stepping back and recanting. But Facebook is a service predicated on sharing information with others, and the Beacon marketing fiasco aside, there's not as much there that users didn't put there themselves.

Google, on the other hand, has made itself essential with free services like YouTube and Gmail. The cost of dropping off Facebook is increased difficulty in stalking your peers, plus nagging questions about why you don't have Facebook. The cost of dropping off Google is, often as not, moving your entire online system for managing communication and information in multiple media elsewhere.

Privacy groups such as Common Sense Media are concerned about users' inability to opt out of the information collection and sharing. "Even if the company believes that tracking users across all platforms improves their services, consumers should still have the option to opt out," Common Sense Media CEO James Steyer said in a statement. Steyer noted that the ability to opt out would be particularly important for kids and teens who use Google's services; the default setting, he says, should be "opt-in" if you're interested in the integrated experience Google is selling with its collected information.

Google's new privacy policy is not yet in effect, but the company is set to adopt it on March 1. But they may not get that far: Marc Rotenberg, executive director of the Electronic Privacy Information Center, told Ars that opting users into services violates Google's consent order with the FTC. "Google is not allowed, under the settlement, to opt users in. If Google goes forward, they may be hit with serious monetary penalties," said Rotenberg.

A Google spokesperson told Ars, however, that the consent order with the FTC concerns the company's sharing of information with third parties, which the new privacy policy will not affect. But Rotenberg argued that the new information-sharing practices too closely resemble "Google's attempt to use the data of Gmail subscribers to launch Buzz without consent," which prompted the FTC to create the consent order in the first place.

Google has also released a new Terms of Service alongside its privacy policy. In one section, the company states in all caps, "Other than as expressly set out in these terms or additional terms, neither Google nor its suppliers or distributors make any specific promises about the services. For example, we don't make any commitments about… the specific function of the services, or their reliability, availability, or ability to meet your needs. We provide the services 'as is.'"

It's easy to think that because a free service serves you now, it will serve you forever. And it's hard to feel like you aren't owed something by Google and its services; by using them, you do pay the company's bills, after all. Unlike with software you buy outright, you can pull your support from Google if you disagree with how it operates, but always at the cost of shaking your dependency.

Judge blasts "unlawful invasions of privacy" by "rogue" P2P attorney By Nate Anderson | Published 4 days ago

Last September, a federal judge in Texas blasted the "staggering chutzpah" of P2P attorney Evan Stone (seen above in better times), who had issued subpoenas to Internet service providers in a porn film case without the court's permission. Stone was hit with $10,000 in sanctions after lawyers from Public Citizen and the EFF brought the matter to the judge's attention. But there was a problem: Stone didn't pay.

Two days after he was supposed to have coughed up the $10,000, Stone at last filed a motion to stay the sanctions against him. Yesterday, the judge overseeing the case made clear that the sanctions would stand and that, in addition, Stone needs to pay an additional $22,000 to Public Citizen and the EFF to cover attorneys' fees. Should he not pay, the $32,000 total will increase by $500 a day until he does.

“Stone egregiously overstepped his boundaries and invaded the Doe Defendants' privacy without authorization,” wrote Judge David Godbey. "His actions in this case demonstrate that today he remains undeterred. The public has an interest in being free from unlawful invasions of privacy by a rogue attorney."

As if the money weren't bad enough, Stone was (again) ordered to file a copy of the sanctions against him in every state and federal case in which he is an attorney. Though Stone complained this would be bad for business and that he hadn't acted in bad faith, the judge retorted, "It may be the case, as Stone argues, that filing the Sanctions Order will result in damage to his professional goodwill. But it is Stone's actions, and not the resulting Sanctions Order, that do damage to his professional goodwill.”

Stone did not respond to our request for comment.

Symantec: Anonymous stole source code, users should disable pcAnywhere By Jon Brodkin | Published 4 days ago

Symantec has confirmed that the hacker group Anonymous stole source code from the 2006 versions of several Norton security products and the pcAnywhere remote access tool.

Although Symantec says the theft actually occurred in 2006, the issue did not come to light until this month when hackers related to Anonymous said they had the source code and would release it publicly. Users of the Norton products in question are not at any increased risk of attack because of the age of the source code and security improvements made in the years since the breach, but the vendor acknowledged on Tuesday night that "Customers of Symantec's pcAnywhere have increased risk as a result of this incident."

Symantec released a patch fixing three vulnerabilities in pcAnywhere version 12.5 (the current version) on Monday, and said it will continue issuing patches "until a new version of pcAnywhere that addresses all currently known vulnerabilities is released."

Symantec pointed customers to a white paper that recommends disabling pcAnywhere, unless it is needed for business-critical use, because malicious users with access to the source code could identify vulnerabilities and launch new exploits. "At this time, Symantec recommends disabling the product until Symantec releases a final set of software updates that resolve currently known vulnerability risks," the company said. "For customers that require pcAnywhere for business critical purposes, it is recommended that customers understand the current risks, ensure pcAnywhere 12.5 is installed, apply all relevant patches as they are released, and follow the general security best practices discussed herein."

As for Norton, Symantec said the source code stolen was from the 2006 versions of Norton Antivirus Corporate Edition, Norton Internet Security, and Norton SystemWorks. Earlier this month, Symantec said no products were at risk, but changed its message regarding pcAnywhere after further investigation.

US has already flexed cyberwar muscle, says former NSA director By Sean Gallagher | Published 4 days ago

In an interview with Reuters, former National Security Agency Director Mike McConnell claimed that the US has already used cyber attacks against an adversary successfully. And it's just a matter of time before someone unleashes cyber attacks on US critical infrastructure, he warned.

McConnell didn't spell out who exactly the US had attacked with its offensive capabilities. However, security experts have reportedly "all but confirmed" that the US, working in concert with Israel, was at least partially behind the Stuxnet worm that damaged Iran's efforts to enrich uranium.

Now a vice-chairman at Booz Allen Hamilton and leading the firm's cyber work, McConnell is on a campaign to raise awareness of the threat of such attacks being used against the US. "There will be a thousand voices on what is the right thing to do," he told Reuters. And, he added, it will likely take a crisis to achieve consensus—a consensus that would arrive too late.

Booz Allen has a major footprint in the Defense Department, and recently launched a "Cyber Solutions Network" service targeted at helping commercial and government clients build defenses against the sorts of network penetration, exploitation and espionage that McConnell says US intelligence and military are capable of conducting. According to McConnell, the US, Britain, and Russia all have well-developed capabilities when it comes to gaining access to electronic communications such as e-mail without being detected. But he added that the NSA and other agencies conducting surveillance of emerging threats on the Internet are currently "powerless to do a thing" to assist private companies outside of the defense industrial base when they discover threats, "other than to issue a report."

A bill approved by the House Permanent Select Committee on Intelligence in December, the Cyber Intelligence Sharing and Protection Act (H.R. 3523), would give intelligence agencies permission to share classified information on cyber threats with "approved American companies." It doesn't, however, authorize intelligence and defense agencies to provide protection against those attacks. A broad cyber-security bill is expected to be introduced in the Senate later this year.

A quantum speed limit: how fast does quantum information flow through a lattice? By Matthew Francis | Published 5 days ago

The speed of light is the cosmic speed limit, according to physicists' best understanding: no information can be carried at a greater rate, no matter what method is used. But an analogous speed limit seems to exist within materials, where the interactions between particles are typically very short-range and motion is far slower than light-speed. A new set of experiments and simulations by Marc Cheneau and colleagues has identified this maximum velocity, which has implications for quantum entanglement and quantum computations.

In non-relativistic systems, where particle speeds are much less than the speed of light, interactions still occur very quickly, and they often involve lots of particles. As a result, measuring the speed of interactions within materials has been difficult. The theoretical speed limit is set by the Lieb-Robinson bound, which describes how a change in one part of a system propagates through the rest of the material. In this new study, the Lieb-Robinson bound was quantified experimentally for the first time, using a real quantum gas.
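For reference, a standard textbook statement of the bound (not specific to this experiment) says that for observables A and B acting on regions of a lattice separated by a distance d, the commutator satisfies

$$\bigl\| [A(t), B] \bigr\| \le C\, \|A\|\, \|B\|\, e^{-\mu\,(d - v\,|t|)},$$

so outside the "cone" where d > v|t|, a disturbance at A has an exponentially small effect on measurements at B. The constant v is the Lieb-Robinson velocity, the effective speed limit that the new experiment probes.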

Within a lattice (such as a crystalline solid), a particle primarily interacts with its nearest neighbors. For example, the spin of an electron in a magnetically susceptible material depends mainly on the orientation of the spins of its neighbors on each side. Flipping one electron's spin will affect the electrons nearest to it.

But the effect also propagates throughout the rest of the material—other spins may themselves flip, or experience a change in energy resulting from the original electron's behavior. These longer-range interactions can be swamped by extraneous effects, like lattice vibrations. But it's possible to register them in very cold systems, as lattice vibrations die out near absolute zero.

In the experiment described in Nature, the researchers begin with a simple one-dimensional quantum gas consisting of atoms in an optical lattice. This type of trap is made by crossing laser beams so that they interfere and create a standing-wave pattern; by adjusting the power output of the lasers, the trap can be made deeper or shallower. Optical lattices are much simpler than crystal lattices, as the atoms are not involved in chemical bonding.
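In the simplest one-dimensional case (a textbook result rather than a detail from the paper), two counter-propagating beams of wavelength $\lambda$ produce a periodic potential

$$V(x) = V_0 \sin^2(kx), \qquad k = \frac{2\pi}{\lambda},$$

where the lattice depth $V_0$ scales with the laser intensity, so turning up the laser power is what makes the trap deeper.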

By rapidly increasing the depth of the optical lattice, the researchers create what is known as a quenched system. You can think of this as analogous to plunging a hot forged piece of metal into water to cool it quickly. Before the change, the atoms are in equilibrium; after the change, they are highly excited.

As in many other strongly interacting systems, these excitations take the form of quasiparticles that can travel through the lattice. Neighboring quasiparticles begin with their quantum states entangled, but propagate rapidly in opposite directions down the lattice. As in all entangled systems, the states of the quasiparticles remain correlated even as the separation between them grows. By measuring the distance between the excitations as a function of time, the velocity of the quasiparticles' propagation can be determined; it turns out to be more than twice the speed of sound in the system.

The specific lattice strengths used in the experiment make it difficult to do direct comparisons to theory, so the researchers were only able to use a first-principles numerical model (as opposed to a detailed theoretical calculation). To phrase it another way, the velocity they measured cannot currently be derived directly from fundamental quantum physics.

It's also difficult to generalize these results. Systems with other physical properties will have different maximum speeds, just as light moves at different speeds depending on the medium; the researchers found things changed even within a simple one-dimensional lattice whenever they varied the interaction strength between the atoms.

However, showing that excitations must have a consistent maximum speed is a groundbreaking result. As with relativity, this speed limit creates a type of "light cone" that separates regions where interactions can occur and where they are forbidden. This has profound implications for the study of quantum entanglement, and thus most forms of quantum computing.

"We're just like YouTube," Megaupload lawyer tells Ars By Nate Anderson | Published 5 days ago "We're just like YouTube," Megaupload lawyer tells Ars

Megaupload's US attorney, Ira Rothken, has a succinct description of the US government case against his client: "wrong on the facts and wrong on the law."

The week has been a busy one for Rothken, a San Francisco Internet law attorney who has previously represented sites like isoHunt and video game studios like Pandemic. When I call, he's eating crab cakes and waiting for yet another meeting to start, but he has plenty of time to attack the government's handling of the Megaupload case.

In Rothken's words, the government is acting like a "copyright extremist" by taking down one of the world's largest cloud storage services "without any notice or chance for Megaupload to be heard in a court of law." The result is "offensive to the rights of Megaupload but also to the rights of millions of consumers worldwide" who stored personal data with the service.

The best way to look at Megaupload, he says, is through the lens of Viacom's $1 billion lawsuit against YouTube—an ongoing civil case which Viacom lost at trial. (It is being appealed.)

For instance, Viacom dug up an early e-mail from a YouTube co-founder to another co-founder saying: "Please stop putting stolen videos on the site. We’re going to have a tough time defending the fact that we’re not liable for the copyrighted material on the site because we didn’t put it up when one of the co-founders is blatantly stealing content from other sites and trying to get everyone to see it."

"Whatever allegations that they can make against Megaupload they could have made against YouTube," he says of the government. "And YouTube prevailed!" (Rothken made a similar case when he represented search engine isoHunt in 2010, saying it was just like Google.) Ira Rothken Ira Rothken

Under this view, Megaupload should have been served with DMCA takedown notices (the site did have a registered DMCA agent, as required by law, though not until 2009). If rightsholders believed that was insufficient, they should have conferred with Megaupload's US counsel (the company had retained US attorneys for some time before the current action). And if that wasn't satisfactory, a civil copyright infringement lawsuit should have been filed, one that would not have taken the site down first and asked questions later.

Instead, the government's willingness to pursue the case as an international racketeering charge meant "essentially only sticking up for one side of the copyright vs. technology debate." The result, Rothken says, is the "terrible chilling effect it's having on Internet innovators" who build cloud storage components into their businesses.

The US Department of Justice released a lengthy statement to the press detailing the charges against Megaupload, while New Zealand police publicly offered crazy details of their bid to arrest Megaupload founder Kim Dotcom (born Kim Schmitz). "Police arrived in two marked Police helicopters," said New Zealand Detective Inspector Grant Wormald at a press conference. "Despite our staff clearly identifying themselves, Mr. Dotcom retreated into the house and activated a number of electronic locking mechanisms. While Police neutralised these locks he then further barricaded himself into a safe room within the house which officers had to cut their way into. Once they gained entry into this room they found Mr Dotcom near a firearm which had the appearance of a shortened shotgun. It was definitely not as simple as knocking at the front door."

"James Bond tactics with helicopters and weaponry have a detrimental effect on society as a whole."

This sort of thing makes Rothken furious. Using "James Bond tactics with helicopters and weaponry, and breaking into homes over what is apparently a philosophical debate over the balance between copyright protection and the freedom to innovate, are heavy-handed tactics, are over-aggressive, and have a detrimental effect on society as a whole," he said. In addition, the raid was a reminder that bills like the Stop Online Piracy Act "ought not to ever be passed, because these tactics [the helicopters, etc.] are so offensive that if you take the shackles off of government, it may lead to more abuse, more aggression."

Rothken also suggested that the timing of the raid was suspicious: "over a two-year period, they happened to pick the one week where SOPA started going south."

I asked about specific allegations in the indictment, including the government's quotation of internal e-mails showing employees asking for and uploading copyrighted material. Rothken wouldn't address any specifics, but he did claim the government had engaged in some highly selective editing, choosing a few "bad communications" out of terabytes of seized data. It's as if one were to "judge the character of a person by the three worst things they ever did as a college student and ignored all the things they did as an adult."

For now, the case remains in New Zealand, where questions of bail and then extradition are being handled by local courts. Though the entire case could take a long while to wind its way to completion, Rothken concludes, "Megaupload believes strongly it's going to prevail."

Spin room

This is not a view that convinces either the US government or major copyright holders. Michael Fricklas, general counsel of Viacom and the man overseeing the company's litigation against YouTube, finds the Megaupload/YouTube comparison to be "quite a spin."

"Kim Dotcom paid uploaders who were also in it for the money, and knew about lots of specific infringement."

"The indictment shows that Kim Dotcom was deeply involved in every aspect of the site, designed the site to encourage infringement, helped specific users find pirated content and improve the piracy experience, paid uploaders who were also in it for money, and knew about lots of very specific infringement," he told me this afternoon. "Thus, even under YouTube's extreme view of the DMCA protections, the DMCA would provide no defense. Criminal and civil proceedings each have a different set of processes and outcomes, and are certainly not mutually exclusive. There are many times—such as in the case of Megaupload—where it is entirely appropriate for both types of action to take place."

A Department of Justice spokesperson told me that the government only goes after groups that show enough evidence of "willful" criminal conduct to take them beyond the realm of merely civil litigation, and that Megaupload certainly qualifies thanks to the same factors mentioned by Fricklas.

As for the timing of the arrests, the DOJ says it had nothing to do with the SOPA debate. After nearly two years of investigation involving many different countries, the indictment against Megaupload was returned by the grand jury investigating the group on January 5 of this year—almost two weeks before the big anti-SOPA protests captured the Web's attention. The arrests themselves—complete with the police helicopters and the cutting into Dotcom's safe room—took place shortly after New Zealand police obtained arrest warrants.

What the case may show more than anything else is the sheer disparity between the dueling worldviews involved. Was the Megaupload takedown an offensive assault on innovators who may have, on a few occasions, done something a tiny bit naughty—or was it a massive Mega-conspiracy worthy of an international police takedown?

Jumping spiders pounce using blurry green images of prey By John Timmer | Published 4 days ago

The four forward-facing eyes of a jumping spider.

A picture is two-dimensional and yet, when we look at it, we perceive depth. A number of visual cues tip us off to the relative distances of items in a photo. One of them is focus: if something is out of focus, we know it's not going to be the same distance away as something that appears sharp. To date, however, no animals were known to use focus as their primary means of estimating depth. But a paper in today's issue of Science provides some compelling evidence that this approach is the primary method used by jumping spiders.

Jumping spiders, as their name implies, don't capture their prey in webs. Instead, they make sudden leaps to reach and rapidly disable their targets. As you might imagine, that requires very accurate depth perception. Get the distance wrong and the spider could come up short of its prey, allowing it to escape.

Given that the spiders have two sets of eyes facing forward, depth perception wouldn't seem to be a problem. However, researchers have blocked the vision in the pair of outside eyes—technically, the anterior lateral eyes—and found that this doesn't impact depth perception at all. (You may now pause for a moment to envision spiders with tiny blindfolds on that cover two of their four forward-facing eyes.)

The spiders also remain motionless prior to striking, which means that they can't use the difference in perspective provided by motion to judge distances. Finally, the principal, forward-facing eyes don't have the sort of distinct-but-overlapping visual field that lets some other organisms judge distance. In short, we know a number of different methods for organisms to judge distance, and spiders appear to use none of them.

So what do they do? The researchers began to suspect that the spiders might use out-of-focus images to figure out where their prey resides. Their suspicion came from earlier studies of the spiders' principal eyes. These focus light onto a retina composed of several distinct layers, with different wavelengths of light being in focus on different layers (UV and blue on the top layer, redder light on the deeper ones). The odd thing about this structure is that a green-sensitive pigment is present both in the layer where green light would be in focus and in a layer where it would only register out-of-focus images.
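The basic geometry of depth-from-defocus can be sketched with the thin-lens equation (a generic optics relation, offered as an illustration rather than a model of the spider's eye). For a lens of focal length $f$ and aperture diameter $A$, an object at distance $d_o$ comes to a focus at an image distance $d_i$ given by $1/d_i = 1/f - 1/d_o$. If a retinal layer sits at a fixed distance $s$ behind the lens, the object is smeared into a blur circle of diameter roughly

$$c \approx A\,\frac{\lvert d_i - s \rvert}{d_i},$$

so for a given layer, the amount of blur depends directly on how far away the object is.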

The obvious consequence of this system is that the spiders' ability to judge depth would be entirely dependent on the visual scene containing some green light. So the researchers set up some flies, illuminated the scene in either red or green light, and then set the spiders loose. The spiders managed to make an accurate leap when the prey was illuminated in green, but when red light was used, the spiders generally came up short. In fact, they actually had to make a second leap in a few of the tests.

Although these results are pretty clear cut, the biggest weakness is that there were only a limited number of tests—four each for green and two different intensities of red light. The authors say that further work needs to go into characterizing the system.

But they also suggest that the further work would be worthwhile. The jumping spiders are the first animals that appear to be using this "depth through defocus" as their primary system of vision, but there are people trying to develop the approach for use in robotic systems. The authors think that a better understanding of a biological approach, honed through millions of years of evolution, might help the engineers out.
