KentuckyFC writes Statisticians have long thought it impossible to tell cause and effect apart using observational data. The problem is to take two sets of correlated measurements, say X and Y, and find out whether X caused Y or Y caused X. That's straightforward with a controlled experiment in which one variable can be held constant to see how this influences the other. Take, for example, a correlation between wind speed and the rotation speed of a wind turbine. Observational data gives no clue about cause and effect, but an experiment that holds the wind speed constant while measuring the speed of the turbine, and vice versa, would soon give an answer. But in the last couple of years, statisticians have developed a technique that can tease apart cause and effect from observational data alone. It is based on the idea that any set of measurements always contains noise. However, the noise in the cause variable can influence the effect but not the other way round, so the noise in the effect dataset is always more complex than the noise in the cause dataset. The new statistical test, known as the additive noise model, is designed to find this asymmetry. Now statisticians have tested the model on 88 sets of cause-and-effect data, ranging from altitude and temperature measurements at German weather stations to the correlation between rent and apartment size in student accommodation. The results suggest that the additive noise model can tease apart cause and effect correctly in up to 80 per cent of cases (provided there are no confounding factors or selection effects). That's a useful new trick in a statistician's armoury, particularly in areas of science where controlled experiments are expensive, unethical or practically impossible.
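The core idea can be sketched in a few lines: regress each variable on the other and ask in which direction the residuals look independent of the input. This is a toy illustration only; the squared-correlation dependence measure below is a crude stand-in for the proper independence tests used in the actual papers, and all function names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a cause-effect pair: X causes Y through a nonlinear function
# plus additive noise that is independent of X.
x = rng.uniform(-2, 2, 2000)
y = x ** 3 + rng.normal(0, 0.2, 2000)

def dependence_score(residuals, inputs):
    # Least-squares residuals are uncorrelated with the regressors by
    # construction, so look at second moments instead -- a crude proxy
    # for the independence tests the real additive noise model uses.
    return abs(np.corrcoef(residuals ** 2, inputs ** 2)[0, 1])

def anm_score(cause, effect, deg=3):
    # Regress effect on cause; a small score means the residuals look
    # independent of the cause, i.e. a plausible causal direction.
    coeffs = np.polyfit(cause, effect, deg)
    residuals = effect - np.polyval(coeffs, cause)
    return dependence_score(residuals, cause)

forward = anm_score(x, y)   # tests X -> Y (the true direction)
backward = anm_score(y, x)  # tests Y -> X

print("X->Y residual dependence:", round(forward, 3))
print("Y->X residual dependence:", round(backward, 3))
print("inferred cause:", "X" if forward < backward else "Y")
```

In the true direction the residuals are just the independent noise term, so the dependence score stays low; in the wrong direction the residuals inherit structure from the effect variable and the score is visibly larger.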
130 comments | 2 days ago
MTorrice writes The 2008 recession hammered the U.S. auto industry, driving down sales of 2009 models to levels 35% lower than those before the economic slump. A new study has found that because sales of new vehicles slowed, the average age of the U.S. fleet climbed more than expected, increasing the rate of air pollutants released by the fleet.
In 2013, the researchers studied the emissions of more than 68,000 vehicles on the roads in three cities—Los Angeles, Denver, and Tulsa. They calculated the amount of pollution released per kilogram of fuel burned for the 2013 fleet and compared the rates to those that would have occurred if the 2013 fleet had the same age distribution as the prerecession fleet. For the three cities, carbon monoxide emissions were greater by 17 to 29%, hydrocarbons by 9 to 14%, nitrogen oxide emissions by 27 to 30%, and ammonia by 7 to 16%.
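The counterfactual in the study is just a re-weighting: hold the per-age emission rates fixed and swap in the prerecession age distribution. A back-of-the-envelope version with invented numbers (not the study's actual rates or distributions) looks like this:

```python
import numpy as np

# Hypothetical per-age emission rates (g CO per kg fuel burned) and two
# fleet age distributions; all values are illustrative, not the study's.
ages = np.array([1, 4, 8, 12, 16])            # vehicle age bins (years)
co_rate = np.array([5.0, 10.0, 20.0, 35.0, 55.0])  # g CO / kg fuel, by age

pre_recession = np.array([0.25, 0.30, 0.25, 0.15, 0.05])  # age shares
fleet_2013    = np.array([0.15, 0.25, 0.30, 0.20, 0.10])  # older on average

rate_pre  = co_rate @ pre_recession   # fleet-average g CO / kg fuel
rate_2013 = co_rate @ fleet_2013
increase = 100 * (rate_2013 - rate_pre) / rate_pre
print(f"Fleet-average CO rate rose {increase:.0f}% due to the older fleet")
```

With these made-up shares the older 2013-style fleet emits about 26% more CO per kilogram of fuel, the same kind of shift the study reports for the three cities.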
176 comments | about a week ago
HughPickens.com writes Jason Kane reports at PBS that emergency treatments delivered in ambulances that offer "Advanced Life Support" for cardiac arrest may be linked to more deaths, comas, and brain damage than those providing "Basic Life Support." "They're taking a lot of time in the field to perform interventions that don't seem to be as effective in that environment," says Prachi Sanghavi. "Of course, these are treatments we know are good in the emergency room, but they've been pushed into the field without really being tested and the field is a much different environment." The study suggests that high-tech equipment and sophisticated treatment techniques may distract from what's most important during cardiac arrest — transporting a critically ill patient to the hospital quickly.
Basic Life Support (BLS) ambulances stick to simpler techniques, like chest compressions, basic defibrillation and hand-pumped ventilation bags to assist with breathing with more emphasis placed on getting the patient to the hospital as soon as possible. Survival rates for out-of-hospital cardiac arrest patients are extremely low regardless of the ambulance type with roughly 90 percent of the 380,000 patients who experience cardiac arrest outside of a hospital each year not surviving to hospital discharge. But researchers found that 90 days after hospitalization, patients treated in BLS ambulances were 50 percent more likely to survive than their counterparts treated with ALS. Not everyone is convinced of the conclusions. "They've done as much as they possibly can with the existing data but I'm not sure that I'm convinced they have solved all of the selection biases," says Judith R. Lave. "I would say that it should be taken as more of an indication that there may be some very significant problems here."
112 comments | about three weeks ago
An anonymous reader writes Nielsen is going to start studying the streaming behavior of online viewers for the first time. Netflix has never released detailed viewership data, but Nielsen says it has developed a way for its rating meters to track shows by identifying their audio. From the article: "Soon Nielsen, the standard-bearer for TV ratings, may change that. The TV ratings company revealed to the Wall Street Journal that it's planning to begin tracking viewership of online video services like Netflix and Amazon Prime Instant Video in December by analyzing the audio of shows that are being streamed. The new ratings will come with a lot of caveats—they won't track mobile devices and won't take into account Netflix's large global reach—but they will provide a sense for the first time of which Netflix shows are the most popular. And if the rest of the media world latches onto these new ratings as a standard, Netflix won't be able to ignore them."
55 comments | about 1 month ago
MojoKid writes Last week, NVIDIA offered information regarding its Android Lollipop update for the SHIELD Tablet and also revealed a new game bundle for it. This week, NVIDIA gave members of the press early access to the Lollipop update, which will also be rolling out to the general public sometime later today. Some of the changes are subtle, but others are more significant and definitely give the tablet a different look and feel from the original Android KitKat release. Android Lollipop introduces a new "material design" that further flattens out the look of the OS. Google seems to have taken a more minimalist approach, as everything from the keyboard to the settings menus has been cleaned up considerably. Many parts of the interface don't have any markings except for the absolute necessities. While the OS definitely feels more fluid and responsive, the default look isn't always better, depending on your personal taste. The app tray, for example, has a plain white background, which looks kind of jarring if you're using a colorful wallpaper. And finding the proper touch points for things like a settings menu or clearing notifications isn't always easy. Performance-wise, NVIDIA's SHIELD Tablet showed significantly better performance on Lollipop for general compute tasks in benchmarks like Mobile XPRT but lagged slightly behind KitKat in graphics performance, which could be attributed to driver optimization.
57 comments | about a month ago
HughPickens.com writes: Every year the works of thousands of authors enter the public domain, but only a small percentage of these end up being widely available. So how do organizations such as Project Gutenberg choose which works to focus on? Allen Riddell has developed an algorithm that automatically generates an independent ranking of notable authors for any given year. It is then a simple task to pick the works to focus on or to spot notable omissions from the past. Riddell's approach is to look at what kind of public domain content the world has focused on in the past and then use this as a guide to find content that people are likely to focus on in the future.
Riddell's algorithm begins with the Wikipedia entries of all authors in the English language edition (PDF)—more than a million of them. His algorithm extracts information such as the article length, article age, estimated views per day, time elapsed since last revision, and so on. This produces a "public domain ranking" of all the authors that appear on Wikipedia. For example, the author Virginia Woolf has a ranking of 1,081 out of 1,011,304 while the Italian painter Giuseppe Amisani, who died in the same year as Woolf, has a ranking of 580,363. So Riddell's new ranking clearly suggests that organizations like Project Gutenberg should focus more on digitizing Woolf's work than Amisani's. Of the individuals who died in 1965 and whose work will enter the public domain next January in many parts of the world, the new algorithm picks out TS Eliot as the most highly ranked individual. Others highly ranked include Somerset Maugham, Winston Churchill, and Malcolm X.
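A toy version of such a ranking can be built by scoring each author's Wikipedia article on a few of the features Riddell mentions and sorting. The feature values and weights below are entirely invented for illustration; Riddell's actual ranking comes from a fitted model, not hand-picked weights.

```python
import math

# Invented Wikipedia-article features for three authors; real values differ.
authors = {
    "Virginia Woolf":   {"length": 120_000, "views_per_day": 3500, "age_days": 6000},
    "T. S. Eliot":      {"length": 95_000,  "views_per_day": 4200, "age_days": 6200},
    "Giuseppe Amisani": {"length": 4_000,   "views_per_day": 15,   "age_days": 2500},
}

def score(f):
    # Log-scale the heavy-tailed features so no single one dominates;
    # the weights are arbitrary placeholders.
    return (0.4 * math.log1p(f["length"])
            + 0.5 * math.log1p(f["views_per_day"])
            + 0.1 * math.log1p(f["age_days"]))

ranking = sorted(authors, key=lambda a: score(authors[a]), reverse=True)
print(ranking)  # most notable authors first
```

Even this crude score puts Woolf and Eliot far ahead of Amisani, mirroring the gap between rank 1,081 and rank 580,363 in Riddell's real ranking.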
55 comments | about a month ago
jones_supa writes We are all aware of the various chirping and whining sounds that electronics can produce. Modern graphics cards often suffer from these kinds of problems in the form of coil whine. But how widespread is it really? Hardware Canucks put 50 new graphics cards side-by-side to compare them solely from the perspective of subjective acoustic disturbance. NVIDIA's reference platforms tended to be quite well behaved, just like their board partners' custom designs. The same can't be said about AMD, since their reference R9 290X and R9 290 should be avoided if you're at all concerned about squealing or any other odd noise a GPU can make. However, the custom Radeon-branded SKUs should usually be a safe choice. While the amount and intensity of coil whine largely seems to boil down to luck of the draw, at least most board partners are quite friendly regarding their return policies concerning it.
111 comments | about a month ago
An anonymous reader writes Scientists from Los Alamos National Laboratory have used Wikipedia logs as a data source for forecasting disease spread. The team was able to successfully monitor influenza in the United States, Poland, Japan, and Thailand, dengue fever in Brazil and Thailand, and tuberculosis in China and Thailand. The team was also able to forecast all but one of these, tuberculosis in China, at least 28 days in advance.
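The shape of the approach can be sketched with synthetic data: fit a simple regression from lagged page views to case counts, then use new views to forecast ahead. None of this is the Los Alamos model — the signal, the lag, and the linear fit are stand-ins chosen only to show the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: weekly flu case counts and page views of
# flu-related Wikipedia articles. Search interest leads recorded cases.
weeks = 104
cases = 1000 + 800 * np.sin(np.arange(weeks) * 2 * np.pi / 52)
views = 2.5 * np.roll(cases, -4) + rng.normal(0, 50, weeks)  # views lead by 4 weeks

lag = 4  # forecast horizon in weeks, roughly the study's 28 days
X = np.column_stack([np.ones(weeks - lag), views[:-lag]])
y = cases[lag:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit cases ~ lagged views

pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("R^2 of 4-week-ahead forecast:", round(r2, 2))
```

Because the synthetic views genuinely lead the cases, the lagged regression forecasts almost perfectly; real page-view signals are far noisier, which is why one of the ten disease/country pairs resisted forecasting.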
61 comments | about a month ago
Lucas123 writes Backblaze, which has taken to publishing data on hard drive failure rates in its data center, has just released data from a new study of nearly 40,000 spindles revealing what it said are the top 5 SMART (Self-Monitoring, Analysis and Reporting Technology) values that correlate most closely with impending drive failures. The study also revealed that many SMART values one would innately consider related to drive failures actually don't relate to them at all. Gleb Budman, CEO of Backblaze, said the problem is that the industry has created vendor-specific values, so that a stat related to one drive and manufacturer may not relate to another. "SMART 1 might seem correlated to drive failure rates, but actually it's more of an indication that different drive vendors are using it themselves for different things," Budman said. "Seagate wants to track something, but only they know what that is. Western Digital uses SMART for something else — neither will tell you what it is."
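The kind of screening Backblaze describes amounts to correlating each SMART attribute with the failure flag across the fleet. Here is a minimal sketch on synthetic data, where one attribute is genuinely tied to failure and another is vendor-specific noise; the attribute numbers follow SMART convention but the data is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Synthetic drive fleet: ~5% of drives fail.
failed = rng.random(n) < 0.05
# SMART 5 (reallocated sectors): elevated on failing drives.
smart_5 = rng.poisson(30, n) * failed + rng.poisson(1, n)
# SMART 1 (read error rate): vendor-specific, unrelated to failure here.
smart_1 = rng.integers(0, 200, n)

def point_biserial(attr, flag):
    # Correlation between a numeric attribute and a boolean failure flag.
    return np.corrcoef(attr, flag.astype(float))[0, 1]

print(f"SMART 5 vs failure: {point_biserial(smart_5, failed):+.2f}")
print(f"SMART 1 vs failure: {point_biserial(smart_1, failed):+.2f}")
```

On this toy fleet SMART 5 correlates strongly with failure while SMART 1 hovers near zero, which is exactly the distinction the study draws between predictive and vendor-idiosyncratic attributes.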
142 comments | about a month ago
KentuckyFC writes During the Chinese New Year earlier this year, some 3.6 billion passenger journeys were made across China, making it the largest seasonal migration on Earth. These kinds of mass movements have always been hard to study in detail. But the Chinese web services company Baidu has managed it using a mapping app that tracked the location of 200 million smartphone users during the New Year period. The latest analysis of this data shows just how vast this mass migration is. For example, over 2 million people left the Guangdong province of China and returned just a few days later--that's equivalent to the entire population of Chicago upping sticks. The work shows how easy it is to track the movement of large numbers of people with current technology--assuming they are willing to allow their data to be used in this way.
48 comments | about a month ago
bmahersciwriter writes Citation is the common way that scientists nod to the important and foundational work that preceded their own, and the number of times a particular paper is cited is often used as a rough measure of its impact. So what are the most highly cited papers in more than a century of scientific research? Is it the determination of DNA's structure? The identification of rapid expansion in the Universe? No. The top 100 most cited papers are actually a motley crew of methods, data resources and software tools that through usability, practicality and a little bit of luck have been propelled to the top of an enormous corpus of scientific literature.
81 comments | about 1 month ago
An anonymous reader writes Lenovo is the latest tech company to enter the fitness tracker market with its Smartband SW-B100 device. "It can record calories burnt, steps taken and a user's heartrate, in addition to syncing with a smartphone through an app to provide more complete health data. Users can also customize notifications and reminders on the smartband, and even use it to unlock a Windows PC without typing in the password, according to the product page."
51 comments | about 2 months ago
jones_supa writes: Microsoft has just released Windows 10 TP build 9860. Along with the new release, Microsoft is introducing an interesting cadence option for how quickly you will receive new builds. The "ring progression" goes from development, to testing, to release. By choosing the slow cadence, you will get more stable builds, but they will arrive less often. Choosing the fast option lets you receive a build on the same day that it is released. As a quick stats update, to date Microsoft has received over 250,000 pieces of feedback through the Windows Feedback tool, 25,381 community forum posts, and 641 suggestions in the Windows Suggestion Box.
112 comments | about 2 months ago
itwbennett (1594911) writes A partnership between TV measurement company Nielsen and analytics provider Adobe, announced today, will let broadcasters see (in aggregate and anonymized) how people interact with digital video between devices — for example if you begin watching a show on Netflix on your laptop, then switch to a Roku set-top box to finish it. The information learned will help broadcasters decide what to charge advertisers, and deliver targeted ads to viewers. Broadcasters can use the new Nielsen Digital Content Ratings, as they're called, beginning early next year. Early users include ESPN, Sony Pictures Television, Turner Broadcasting and Viacom.
126 comments | about 2 months ago
jones_supa writes: Two weeks in, and already a million people have tried out Windows 10 Technical Preview, reports Microsoft, along with a nice stack of other stats and feedback. Only 36% of installations are occurring inside a virtual machine. 68% of Windows 10 Technical Preview users are launching more than seven apps per day, with somewhere around 25% of testers using Windows 10 as their daily driver (26 app launches or more per day). With the help of Windows 10's built-in feedback tool, thousands of testers have made it very clear that Microsoft's new OS still has lots of irksome bugs and lacks many much-needed features. ExtremeTech has posted an interesting list of the most popular gripes received, most of them various GUI annoyances. What has your experience been with the Technical Preview?
147 comments | about 2 months ago
Bennett Haselton writes As commenters continue to blame Jennifer Lawrence and other celebrities for allowing their nude photos to be stolen, there is only one rebuttal to the victim-blaming which actually makes sense: that for the celebrities taking their nude selfies, the probable benefits of their actions outweighed the probable negatives. Most of the other rebuttals being offered are logically incoherent and, as such, are not likely to change the minds of the victim-blamers. Read below to see what Bennett has to say.
622 comments | about 2 months ago
HughPickens.com writes Randy Olson, a Computer Science grad student who works with data visualizations, writes about seven of the biggest factors that predict what makes for a long-term stable marriage in America. Olson took the results of a study that polled thousands of recently married and divorced Americans and asked them dozens of questions about their marriage (PDF): how long they were dating, how long they were engaged, etc. After running this data through a multivariate model, the authors were able to calculate the factors that best predicted whether a marriage would end in divorce. "What struck me about this study is that it basically laid out what makes for a stable marriage in the US," writes Olson. Here are some of the biggest factors:
How long you were dating: (Couples who dated 1-2 years before their engagement were 20% less likely to end up divorced than couples who dated less than a year before getting engaged. Couples who dated 3 years or more were 39% less likely to get divorced.); How much money you make: (The more money you and your partner make, the less likely you are to ultimately file for divorce. Couples who earn $125K per year are 51% less likely to divorce than couples making $0 to $25K); How often you go to church: (Couples who never go to church are 2x more likely to divorce than regular churchgoers.); Your attitude toward your partner: (Men are 1.5x more likely to end up divorced when they care more about their partner's looks, and women are 1.6x more likely to end up divorced when they care more about their partner's wealth.); How many people attended the wedding: ("Crazy enough, your wedding ceremony has a huge impact on the long-term stability of your marriage. Perhaps the biggest factor is how many people attend your wedding: Couples who elope are 12.5x more likely to end up divorced than couples who get married at a wedding with 200+ people."); How much you spent on the wedding: (The more you spend on your wedding, the more likely you'll end up divorced.); Whether you had a honeymoon: (Couples who had a honeymoon are 41% less likely to divorce than those who had no honeymoon)
Of course correlation is not causation. For example, expensive weddings may simply attract the kind of immature and narcissistic people who are less likely to sustain a successful marriage and such people might end up getting divorced even if they married cheaply. But "the particularly scary part here is that the average cost of a wedding in the U.S. is well over $30,000," says Olson, "which doesn't bode well for the future of American marriages."
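To get a feel for the arithmetic, one can naively stack the reported multipliers for a hypothetical couple. Treating each factor as independent is a strong assumption the study itself does not make (its multivariate model accounts for overlap between factors), so this is illustration only:

```python
# Combine a few of the reported risk reductions multiplicatively for a
# hypothetical couple. Independence between factors is assumed here,
# which the study's multivariate model does not actually claim.
factors = {
    "dated 3+ years before engagement": 1 - 0.39,  # 39% less likely
    "household income $125K+":          1 - 0.51,  # 51% less likely
    "had a honeymoon":                  1 - 0.41,  # 41% less likely
}

relative_risk = 1.0
for name, multiplier in factors.items():
    relative_risk *= multiplier

print(f"Relative divorce risk vs baseline: {relative_risk:.2f}")
```

Under that (unrealistic) independence assumption, the hypothetical couple's divorce risk comes out at roughly 18% of the baseline, which shows why the individual multipliers cannot simply be read as additive percentages.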
447 comments | about 2 months ago
theodp (442580) writes "Well, the College Board has posted the 2014 AP Computer Science Test scores. So, before the press rushes out another set of Not-One-Girl-In-Wyoming-Took-an-AP-CS-Exam stories, let's point out that no Wyoming students of either gender took an AP CS exam again in 2014 (.xlsx). At the overall level, the final numbers have changed somewhat (back-of-the-Excel-envelope calculations, no warranty expressed or implied!), but tell pretty much the same story as the preliminary figures — the number of overall AP CS test takers increased, while pass rates decreased despite efforts to cherry-pick students with a high likelihood of success. What is kind of surprising is how little the test numbers budged for most states — only 8 states managed to add more than 100 girls to the AP CS test taker rolls — despite the PR push by the tech giants, including Microsoft, Google, and Facebook. Also worth noting are some big percentage decreases at the top end of the score segments (5 and 4), and still-way-too-wide gaps that exist between the score distributions of the College Board's various ethnic segments (more back-of-the-envelope calcs). If there's a Data Scientist in the house, AP CS exam figures grabbed from the College Board's Excel 2013 and 2014 worksheets can be found here (Google Sheets) together with the (unwalked-through) VBA code that was used to collect it. Post your insight (and code/data fixes) in the comments!"
144 comments | about 2 months ago
KentuckyFC writes Since 2001, crowdfunding sites have raised almost $3 billion and in 2012 alone successfully funded more than 1 million projects. But while many projects succeed, far more fail. The reasons for failure are varied and many, but one of the most commonly cited is the inability to match a project with suitable investors. Now a group of researchers from Yahoo Labs and the University of Cambridge have mined data from Kickstarter to discover how investors choose projects to back. They studied over 1,000 projects in the US funded by over 80,000 investors. They conclude that there are two types of backers: occasional investors who tend to back arts-related projects, probably because of some kind of social connection to the proposers; and frequent investors who have a much more stringent set of criteria. Frequent investors tend to fund projects that are well-managed, have high pledging goals, are global, grow quickly, and match their interests. The team is now working on a website that will create a list of the Twitter handles of potential investors given the URL of a Kickstarter project.
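The two-type split can be sketched by partitioning backers on pledge count and comparing what each group funds. The records, fields, and threshold below are all invented; the paper's actual segmentation is derived from the Kickstarter data, not a hand-set cutoff.

```python
from collections import Counter

# Invented backer records for illustration only.
backers = [
    {"pledges": 1,  "categories": ["art"]},
    {"pledges": 2,  "categories": ["music", "art"]},
    {"pledges": 1,  "categories": ["art"]},
    {"pledges": 14, "categories": ["tech", "games", "design"]},
    {"pledges": 30, "categories": ["tech", "film"]},
]

THRESHOLD = 5  # pledge count separating occasional from frequent (arbitrary)

def category_profile(group):
    # Aggregate which project categories a group of backers funds.
    counts = Counter()
    for b in group:
        counts.update(b["categories"])
    return counts

occasional = [b for b in backers if b["pledges"] < THRESHOLD]
frequent = [b for b in backers if b["pledges"] >= THRESHOLD]

print("occasional back:", category_profile(occasional).most_common())
print("frequent back:  ", category_profile(frequent).most_common())
```

In this toy data the occasional group clusters on arts categories and the frequent group on tech, echoing the study's finding.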
20 comments | about 2 months ago
whoever57 writes: Official EU fuel economy figures typically overstate the miles per gallon that drivers can expect to achieve in typical driving, and a recent study has confirmed this once again. However, the study also found that MPG figures are more unrealistic for cars with smaller engines than for cars with larger engines. In typical driving, cars with small engines could achieve as much as 36% less than the official figure, while cars with 3-liter engines would typically achieve 15% less. These discrepancies need to be accounted for if we're going to be serious about regulating fuel efficiency. But then, we should be using gallons per mile instead of miles per gallon, too.
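The gallons-per-mile point is easy to demonstrate: because fuel used is the reciprocal of MPG, equal-looking MPG gains save very different amounts of fuel. A quick worked example (the mileage figures are chosen for illustration):

```python
# Fuel consumed over a fixed distance is distance / MPG, so improving a
# thirsty car by 5 MPG saves far more fuel than improving an efficient
# one by 20 MPG -- which is why gallons-per-mile compares more honestly.
def gallons(miles, mpg):
    return miles / mpg

miles = 10_000
saved_low  = gallons(miles, 10) - gallons(miles, 15)  # 10 -> 15 MPG
saved_high = gallons(miles, 30) - gallons(miles, 50)  # 30 -> 50 MPG

print(f"10->15 MPG saves {saved_low:.0f} gallons per {miles} miles")
print(f"30->50 MPG saves {saved_high:.0f} gallons per {miles} miles")
```

The 5 MPG improvement on the inefficient car saves about 333 gallons over 10,000 miles, versus about 133 gallons for the 20 MPG improvement on the efficient one.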
403 comments | about 2 months ago