Creating complex predictive tools

“We were initially approached by an online game provider that used a ‘freemium’ model — players could play for free, but could receive upgrades by paying a fee to become premium users,” says William Rand, an assistant professor of business management at NC State and co-author of a paper on the work. “The company wanted to know what incentives would be most likely to convince players to become premium users. That was the impetus for the work, but what we found is actually relevant for any company or developer interested in incentivizing user investment in apps or online services.”

A preliminary assessment indicated that access to new content was not the primary driver in convincing players to pay a user fee. Instead, player investment seemed to be connected to a player’s social networks.

To learn more, the researchers evaluated three months’ worth of data on 1.4 million users of the online game, including when each player began playing the game; each player’s in-game connections with other players; and whether a player became a premium user.
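As a rough illustration, here is a minimal sketch of how player records like these might be represented, and how a toy agent-based rule could spread premium adoption along in-game connections. The field names, the random network, and the conversion rule are invented for illustration and are not the study's model.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Player:
    joined_day: int                                 # when the player began playing
    premium: bool = False                           # has the player paid?
    friends: list = field(default_factory=list)     # indices of in-game connections

def simulate_adoption(players, days=90, base_rate=0.001, peer_boost=0.01):
    """Toy agent-based rule: each day a free player converts with a small base
    probability plus an assumed boost for every premium friend."""
    for _ in range(days):
        for p in players:
            if not p.premium:
                premium_friends = sum(players[f].premium for f in p.friends)
                if random.random() < base_rate + peer_boost * premium_friends:
                    p.premium = True
    return sum(p.premium for p in players)

# A tiny random network standing in for the 1.4 million real users.
players = [Player(joined_day=random.randint(0, 30)) for _ in range(1000)]
for p in players:
    p.friends = random.sample(range(len(players)), k=5)

print("premium users after 90 days:", simulate_adoption(players))
```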

Using that data, the researchers created a computer model using agent-based

Internet is rife with recommendation systems

But many existing approaches to making recommendations are simplistic, says physicist and computer scientist Cristopher Moore, a Santa Fe Institute professor. Mathematically, these methods often assume that people belong to single groups and that each group of people prefers a single group of items. For example, an algorithm might suggest a science fiction movie to someone who had previously enjoyed another science fiction movie, even if the two movies have nothing else in common.

“It’s not as if every movie belongs to a single genre, or each viewer is only interested in a single genre,” says Moore. “In the real world, each person has a unique mix of interests, and each item appeals to a unique mix of people.”

In a new paper in the Proceedings of the National Academy of Sciences, Moore and his collaborators introduce a new recommendation system that differs from existing models in two major ways. First, it allows individuals and items to belong to mixtures of multiple overlapping groups. Second, it doesn’t assume that ratings are a simple function of similarity; instead, it predicts probability distributions
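A minimal sketch of that idea, with randomly drawn rather than fitted parameters: each user and each item is a mixture over latent groups, and every pair of groups contributes a full probability distribution over the possible ratings instead of a single predicted score.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_groups, n_ratings = 4, 5, 3, 5   # toy sizes, chosen arbitrarily

# Each user and each item is a mixture over groups (each row sums to 1).
user_mix = rng.dirichlet(np.ones(n_groups), size=n_users)
item_mix = rng.dirichlet(np.ones(n_groups), size=n_items)

# For every (user group, item group) pair: a probability distribution over ratings 1..5.
rating_dist = rng.dirichlet(np.ones(n_ratings), size=(n_groups, n_groups))

def predicted_distribution(u, i):
    """Mix the group-level rating distributions according to the user's and
    item's group memberships, yielding a distribution over ratings 1..5."""
    return np.einsum("a,b,abr->r", user_mix[u], item_mix[i], rating_dist)

print(predicted_distribution(0, 2))   # probabilities for ratings 1 through 5
```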

Congestion of mobile network

“We are all faced with situations, during the New Year or other holidays, when we cannot get through on the telephone because the system is overloaded. Mathematical calculations, particularly the methods of queuing theory, allow us to solve such problems,” says Svetlana Moiseeva, a professor at Tomsk State University (TSU). “Creating and studying mathematical models of real telecommunication streams, information systems, and computer networks is highly relevant today.”

For several years, the team, led by Professor Anatoly Nazarov, has been developing models and methods for choosing a rational structure for a service system. Such problems are usually solved by writing a program for each specific case. TSU mathematicians have instead devised a universal method suitable for solving a very broad class of queuing problems.

“We have derived a general formula for the calculation: it is enough to substitute specific parameters for the variables, such as the number of servers, towers, or communication channels, and you can find out under what conditions the system will run smoothly,” says Anatoly Nazarov. “Using this method will enable significant savings on
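A concrete, textbook instance of this kind of calculation is the classical Erlang B formula (a standard queuing result, not the TSU team's general formula): plug in the number of channels and the offered traffic and read off the probability that an arriving call is blocked.

```python
from math import factorial

def erlang_b(channels: int, offered_load: float) -> float:
    """Erlang B blocking probability for an M/M/c/c system: the chance that an
    arriving call finds all channels busy. offered_load is in erlangs
    (arrival rate times mean call duration)."""
    top = offered_load ** channels / factorial(channels)
    bottom = sum(offered_load ** k / factorial(k) for k in range(channels + 1))
    return top / bottom

# Example: 30 channels facing a holiday surge of 25 erlangs of offered traffic.
print(f"blocking probability: {erlang_b(30, 25.0):.3f}")
```

If the blocking probability comes out too high, the number of channels can be increased until the system runs smoothly under the expected load.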

Nanodiamonds may be boost for quantum computing

Currently, computers use binary logic, in which each binary unit — or bit — is in one of two states: 1 or 0. Quantum computing makes use of superposition and entanglement, allowing the creation of quantum bits — or qubits — which can have a vast number of possible states. Quantum computing has the potential to significantly increase computing power and speed.
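As a rough numerical illustration of the difference (a sketch of state vectors only, not of NV-center physics): a classical bit holds one of two values, while a register of n qubits is described by 2^n complex amplitudes.

```python
import numpy as np

# A qubit is a normalized pair of complex amplitudes; this equal superposition
# gives a 50/50 chance of measuring 0 or 1.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(np.abs(qubit) ** 2)      # measurement probabilities: [0.5 0.5]

# n qubits require 2**n amplitudes, which is where the potential power comes from.
n = 10
register = np.zeros(2 ** n, dtype=complex)
register[0] = 1.0              # all qubits in the |0...0> state
print(register.size)           # 1024 amplitudes for just 10 qubits
```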

A number of options have been explored for creating quantum computing systems, including the use of diamonds that have “nitrogen-vacancy” centers. That’s where this research comes in.

Normally, diamond has a very specific crystalline structure, consisting of repeated tetrahedral units, each containing five carbon atoms. The NC State research team has developed a new technique for creating diamond tetrahedrons that instead contain two carbon atoms; one vacancy, where an atom is missing; one carbon-13 atom (a stable carbon isotope that has six protons and seven neutrons); and one nitrogen atom. This is called the NV center. Each NV-doped nanodiamond contains thousands of atoms, but has only one NV center; the remaining tetrahedrons in the nanodiamond are made solely of carbon.

It’s an atomically small distinction, but it makes a big difference.

“That little dot, the NV center, turns

Wearable fitness tracker

The wearable device industry is estimated to grow to more than $30 billion by 2020. These sensors, often worn as bracelets or clips, count the number of steps we take each day and the hours we sleep, and monitor our blood pressure, heart rate, pulse and blood sugar levels.

The list of biophysical functions these devices can measure is growing rapidly. “But nobody has yet figured out a way to translate the information gathered by these devices into measures of health and longevity, let alone monetize this information — until now,” says S. Jay Olshansky, professor of epidemiology and biostatistics at the University of Illinois at Chicago School of Public Health and chief scientist at Lapetus Solutions, who is lead author on the paper. The researchers report that for the first time, the trillions of data points collected by wearable sensors can now be translated into empirically verified measures of health risks and longevity — measures that have significant financial value to third parties like mortgage lenders, life insurance companies, marketers and researchers.

In the study, Olshansky and colleagues use the number of steps taken daily — a measure collected by almost all wearable sensors — and show how, using scientifically verified
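The following is a purely hypothetical sketch of the general idea of turning an average step count into a relative-risk style number. The reference level and coefficient are invented placeholders, not the empirically verified measures the study reports.

```python
import math

REFERENCE_STEPS = 7500        # hypothetical reference activity level
RISK_PER_1000_STEPS = 0.08    # hypothetical log-risk change per 1,000 extra daily steps

def relative_risk(avg_daily_steps: float) -> float:
    """Toy relative-risk score versus the reference level (illustrative only)."""
    return math.exp(-RISK_PER_1000_STEPS * (avg_daily_steps - REFERENCE_STEPS) / 1000)

for steps in (4000, 7500, 12000):
    print(steps, "steps/day ->", round(relative_risk(steps), 2))
```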

Scientists have used machine learning algorithms

Insects that feed by ingesting plant and animal fluids cause devastating damage to humans, livestock, and agriculture worldwide, primarily by transmitting pathogens of plants and animals. These insect vectors can acquire and transmit pathogens causing infectious diseases such as citrus greening through probing on host tissues and ingesting host fluids. The feeding processes required for successful pathogen transmission by sucking insects can be recorded by monitoring voltage changes across an insect-food source feeding circuit.

In this research, entomologists and computer scientists at the United States Department of Agriculture-Agricultural Research Service (USDA-ARS), University of Florida, and Princeton University used machine learning algorithms to teach computers to recognize insect feeding patterns involved in pathogen transmission.

In addition, these machine learning algorithms were used to detect novel patterns of insect feeding and to uncover plant traits that might lead to disruption of pathogen transmission. While these techniques were used to help identify strategies to combat citrus greening, such intelligent monitoring of insect vector feeding will facilitate the rapid screening and disruption of pathogen transmission that causes disease in agriculture, livestock, and humans.
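A rough sketch of what machine-learning classification of feeding waveforms can look like, using synthetic voltage traces and invented class labels rather than the study's actual recordings or categories.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def waveform_features(segment: np.ndarray) -> list:
    """Simple summary statistics of a voltage-trace segment."""
    return [segment.mean(), segment.std(),
            np.abs(np.diff(segment)).mean(), segment.max() - segment.min()]

rng = np.random.default_rng(1)
# Synthetic stand-in segments: "probing" traces are noisier than "resting" ones.
segments = [rng.normal(0, 1.0, 500) for _ in range(50)] + \
           [rng.normal(0, 0.2, 500) for _ in range(50)]
labels = ["probing"] * 50 + ["resting"] * 50

X = np.array([waveform_features(s) for s in segments])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

print(clf.predict([waveform_features(rng.normal(0, 0.9, 500))]))
```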

Web to improve its

Information extraction — or automatically classifying data items stored as plain text — is thus a major topic of artificial-intelligence research. Last week, at the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory won a best-paper award for a new approach to information extraction that turns conventional machine learning on its head.

Most machine-learning systems work by combing through training examples and looking for patterns that correspond to classifications provided by human annotators. For instance, humans might label parts of speech in a set of texts, and the machine-learning system will try to identify patterns that resolve ambiguities — such as when “her” is a direct object and when it’s a possessive adjective.
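A toy version of that kind of supervised disambiguation, assuming a tiny hand-labeled training set and ordinary bag-of-words features; it is not the MIT system, just an illustration of learning from annotated examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled examples: is "her" a direct object (OBJ) or a possessive (POSS)?
sentences = [
    "I saw her at the station",        # OBJ
    "They thanked her for the help",   # OBJ
    "We met her yesterday",            # OBJ
    "I borrowed her umbrella",         # POSS
    "Her idea won the prize",          # POSS
    "They admired her painting",       # POSS
]
labels = ["OBJ", "OBJ", "OBJ", "POSS", "POSS", "POSS"]

# Word and bigram counts stand in for the richer context features a real tagger uses.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["She lost her keys", "The dog followed her home"]))
```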

Typically, computer scientists will try to feed their machine-learning systems as much training data as possible. That generally increases the chances that a system will be able to handle difficult problems.

In their new paper, by contrast, the MIT researchers train their system on scanty data — because in the scenario they’re investigating, that’s usually all that’s available. Rather than settling for what it can wring out of that limited information, the system then turns to the web for additional text that makes the extraction problem easier to solve.

“In information extraction, traditionally, in natural-language processing, you are

The crush in biological cells

“Biological processes that make life happen and cause diseases largely take place inside cells, which can be studied with microscopes and other techniques, but not in enough detail,” said Michael Feig, an MSU professor of biochemistry and molecular biology who led the research project. “Our research has revealed unprecedented details about what exactly takes place inside biological cells, and how proteins in particular behave in their natural environment.”

The team set out to examine whether the crowding in biological cells alters the properties of biological molecules and their ability to carry out their function. Armed with access to the “K computer,” a supercomputer housed at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, the research team was able to conduct computer simulations that model the cellular interior of a bacterium, and show a detailed view of how the various molecular components interact in a very dense environment.

“Our computer simulations were not too far away from simulating an entire cell in full atomistic detail,” Feig said. “These simulations, exceeding 100 million atoms, are the largest of their kind and are several orders of magnitude larger than what is typical in research today.”

The powerful computer simulation led to a discovery

Better way to predict flight delays

“Our proposed method is better suited to analyze datasets with categorical variables (qualitative variables such as weather or security risks instead of numerical ones) related to flight delays. We have shown that it can outperform traditional networks in terms of accuracy and training time (speed),” said Sina Khanmohammadi, lead author of the study and a PhD candidate in systems science within the Thomas J. Watson School of Engineering and Applied Science at Binghamton University.

Currently, flight delays are predicted by artificial neural network (ANN) computer models that are backfilled with delay data from previous flights. An ANN is an interconnected group of computerized nodes that work together to analyze a variety of variables to estimate an outcome — in this case flight delays — much like the way a network of neurons in a brain works to solve a problem. These networks are self-learning and can be trained to look for patterns. The more variables an ANN has to process, the more categorical those variables are, and the more historical data that has to be collected, the slower an ANN becomes at making flight delay predictions.
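For illustration, here is how categorical flight variables are usually fed to a conventional ANN: each category value becomes its own input column through one-hot encoding, which is exactly why many highly categorical variables inflate the network and slow it down. The data is invented, and this sketch shows the conventional approach rather than the Binghamton team's multilevel input layer.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.neural_network import MLPClassifier

# Toy categorical inputs (weather, carrier) and whether the flight was delayed.
X_cat = np.array([
    ["storm", "carrier_a"],
    ["clear", "carrier_b"],
    ["snow",  "carrier_a"],
    ["clear", "carrier_a"],
])
delayed = [1, 0, 1, 0]

encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(X_cat)      # every category becomes its own input column

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, delayed)

print(model.predict(encoder.transform([["storm", "carrier_b"]])))
```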

The Binghamton team introduced a new multilevel input layer ANN to handle categorical variables with a simple structure to help airlines easily see

Machine learning automatically identifies suicidal

A new study shows that computer technology known as machine learning is up to 93 percent accurate in correctly classifying a suicidal person and 85 percent accurate in identifying a person who is suicidal, has a mental illness but is not suicidal, or neither. These results provide strong evidence for using advanced technology as a decision-support tool to help clinicians and caregivers identify and prevent suicidal behavior, says John Pestian, PhD, professor in the divisions of Biomedical Informatics and Psychiatry at Cincinnati Children’s Hospital Medical Center and the study’s lead author.

“These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed,” says Dr. Pestian. “When you look around health care facilities, you see tremendous support from technology, but not so much for those who care for mental illness. Only now are our algorithms capable of supporting those caregivers. This methodology easily can be extended to schools, shelters, youth clubs, juvenile justice centers, and community centers, where earlier identification may help to reduce suicide attempts and deaths.”

The study is published in the journal Suicide and Life-Threatening Behavior, a leading journal for suicide research.

Dr. Pestian and his colleagues enrolled 379 patients in the

Scrambles code to foil cyber attacks

A new program called Shuffler tries to preempt such attacks by allowing programs to continuously scramble their code as they run, effectively closing the window of opportunity for an attack. The technique is described in a study presented this month at the USENIX Symposium on Operating Systems Design and Implementation (OSDI) in Savannah, Ga.

“Shuffler makes it nearly impossible to turn a bug into a functioning attack, defending software developers from their mistakes,” said the study’s lead author, David Williams-King, a graduate student at Columbia Engineering. “Attackers are unable to figure out the program’s layout if the code keeps changing.”

Even after repeated debugging, software typically contains up to 50 errors per 1,000 lines of code, each a potential avenue for attack. Though security defenses are constantly evolving, attackers are quick to find new ways in.

In the early 2000s, computer operating systems adopted a security feature called address space layout randomization, or ASLR. This technique rearranges memory when a program launches, making it harder for hackers to find and reuse existing code to take over the machine. But hackers soon discovered they could exploit memory disclosure bugs to grab code fragments once the program was already running.
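A conceptual sketch of the continuous re-randomization idea described above, not Shuffler's actual mechanism: if code is only reachable through a layout that keeps changing, an address leaked through a memory disclosure quickly goes stale.

```python
import random

FUNCTIONS = ["parse_input", "check_auth", "useful_gadget"]   # invented names

def randomize_layout():
    """Assign every function a fresh, random pretend 'address'."""
    return {name: random.getrandbits(32) for name in FUNCTIONS}

layout = randomize_layout()
leaked = layout["useful_gadget"]   # attacker grabs an address via a memory leak...

layout = randomize_layout()        # ...but the code has already been shuffled again
print("leaked address still valid:", leaked == layout["useful_gadget"])
```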

Shuffler was developed to deflect this latter

Prevent hackers from remotely controlling cars

In order to remotely brake a car traveling at more than 100 kilometers per hour, it was enough for the American security researcher Stephen Checkoway to use the music player software installed in the car together with a smartphone connected to it. “If the software were not connected to the internal network, the so-called CAN bus, of that mid-range sedan, then Checkoway would have had to work harder,” explains Stefan Nuernberger, who leads the Smart Systems Lab at the German Research Center for Artificial Intelligence (DFKI).

The CAN bus was developed in 1983 by the auto industry to avoid having to run long wiring harnesses through cars. The advantage of a bus structure is that only a single transmission line is needed, interconnecting all of the devices and allowing them to communicate with one another. The CAN bus connects not only sensors — for example, for the speed controls — but also actuators such as servo motors. Steering devices, such as a parking assistant, also send their commands over the bus. “From the perspective of IT security, however, this harbors a crucial downside: As soon as one of the devices on the bus is controlled by an

Excessive to the point of injury

It’s a problem that doesn’t just afflict washing machines. It can be an issue with all kinds of machines that rely on vibrations and oscillations, such as industrial shaking devices used to separate different-sized gravel and other raw materials, or riddling machines that loosen the sediment stuck on the insides of a champagne bottle and make the debris easier to remove.

But now researchers have developed an algorithm that could help machines avoid getting trapped in this resonant motion. Using a combination of computer simulations and experiments, the researchers found that by carefully increasing and decreasing the speed of a rotor, they could nudge it past its resonant frequency. The rotor doesn’t get stuck in resonance like the faulty washing machine.
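The resonance effect itself can be seen in the standard amplitude curve of a damped, periodically driven oscillator. This is a textbook formula with assumed parameters, not the authors' control algorithm; it simply shows why the response blows up, and a rotor can get stuck, when the drive frequency sits near the natural frequency.

```python
import numpy as np

omega0 = 10.0   # natural frequency (rad/s), assumed
zeta = 0.05     # damping ratio, assumed
force = 1.0     # drive amplitude per unit mass, assumed

drive = np.linspace(1.0, 20.0, 400)
amplitude = force / np.sqrt((omega0**2 - drive**2)**2 + (2 * zeta * omega0 * drive)**2)

peak = drive[np.argmax(amplitude)]
print(f"response peaks near {peak:.1f} rad/s, close to the natural frequency {omega0}")
```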

“Our method is analogous to pushing a car back and forth in order to get it out of a ditch,” said Alexander Fradkov of the Institute of Problems in Mechanical Engineering, Russian Academy of Sciences. He and his colleagues describe their new research this week in Chaos, from AIP Publishing.

Their method applies particularly when a machine is switched on and its rotor speeds up. As it accelerates, depending on the design of the rest of the machine, it might reach a

Recognize sounds by watching video

But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data has to be first annotated by hand, which is prohibitively expensive for all but the highest-demand applications.

Sound recognition may be catching up, however, thanks to researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). At the Neural Information Processing Systems conference next week, they will present a sound-recognition system that outperforms its predecessors but didn’t require hand-annotated data during training.

Instead, the researchers trained the system on video. First, existing computer vision systems that recognize scenes and objects categorized the images in the video. The new system then found correlations between those visual categories and natural sounds.
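A minimal sketch of that cross-modal recipe, using synthetic stand-in data: scene labels produced by a vision system take the place of hand annotation, and an audio classifier learns to predict those labels from sound features alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_videos = 200

# Pretend scene labels assigned by a vision model to each video's frames.
visual_labels = rng.choice(["beach", "crowd"], size=n_videos)

# Pretend audio features, loosely correlated with the scene type.
audio_features = rng.normal(loc=(visual_labels == "crowd").astype(float)[:, None],
                            scale=0.5, size=(n_videos, 8))

# The sound model is supervised by the vision model's labels, not by humans.
sound_model = LogisticRegression(max_iter=1000).fit(audio_features, visual_labels)
print(sound_model.predict(audio_features[:3]), visual_labels[:3])
```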

“Computer vision has gotten so good that we can transfer it to other domains,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We’re capitalizing on the natural synchronization between vision and sound. We scale up with tons of unlabeled

A new study shows a huge US

Support for solar energy is vast. According to a 2015 Gallup poll, 79 percent of Americans want the US to put more emphasis on developing solar power. Most of the same people, unfortunately, can’t afford to install solar energy systems in their homes. Even after federal tax credits, installing solar panels to cover all of a family’s electricity needs can cost tens of thousands of dollars. For others, a home solar system isn’t a consideration because they rent, or move frequently.

But Michigan Technological University’s Joshua Pearce says he knows the solution: plug and play solar.

“Plug and play systems are affordable, easy to install, and portable,” says Pearce, an associate professor of materials science and engineering and of electrical and computer engineering. “The average American consumer can buy and install them with no training.”

In a study funded by the Conway Fellowship and published in Renewable Energy (DOI: 10.1016/j.renene.2016.11.034), Pearce and researchers Aishwarya Mundada and Emily Prehoda estimate that plug and play solar could provide 57 gigawatts of renewable energy — enough to power the cities of New York and Detroit — with potentially $14.3 to $71.7 billion in sales for retailers and $13 billion a year in cost savings for energy

Shrinks data sets for easier analysis

The methods for creating such “coresets” vary according to application, however. Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that’s tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

“These are all very general algorithms that are used in so many applications,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. “They’re fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.”
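One simple, standard coreset-style construction (an illustrative example, not the paper's algorithm) samples a small, reweighted subset of rows with probability proportional to their squared norms, so that a quantity like A^T A can be estimated from far fewer rows than the original matrix has.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100_000, 50))             # stand-in for a huge matrix

probs = (A ** 2).sum(axis=1)
probs /= probs.sum()                           # sample rows by squared norm

k = 2_000                                      # coreset size
idx = rng.choice(A.shape[0], size=k, p=probs)
C = A[idx] / np.sqrt(k * probs[idx])[:, None]  # reweight the sampled rows

exact = A.T @ A
approx = C.T @ C
print("relative error:", np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```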

As an example, in their paper the researchers apply their technique to a matrix — that is, a table — that maps every article on the English version of Wikipedia against every word that appears on the site. That’s 1.4 million articles, or matrix rows, and 4.4 million words, or matrix columns.

That matrix would be much too large to analyze using

Human face recognition

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.

This property wasn’t built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.

“This is not a proof that we understand what’s going on,” says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”

Indeed, the researchers’ new paper includes a mathematical proof that the particular type of machine-learning system they use,

Smartphone says about you

Four out of five UK adults now have a smartphone, with the market split 50/50 between the two rival operating systems.

Smartphones’ connection with our personalities is so marked that psychologists say smartphones have become an extension of ourselves.

Not only can they be personalised to our preferences, but even the type of smartphone reveals clues about who we are.

Researchers gave over 500 smartphone users several questionnaires about themselves and their attitudes towards their mobile phone.

A comparison of both Android and iPhone users revealed that iPhone users are more likely to be:

  • Younger
  • More than twice as likely to be women
  • More likely to see their phone as a status object
  • More extraverted
  • Less concerned about owning devices favoured by most people

In contrast, Android users were more likely to be:

  • Male
  • Older
  • More honest
  • More agreeable
  • Less likely to break rules for personal gain
  • Less interested in wealth and status

Dr David Ellis from Lancaster University said: “In this study, we demonstrate for the first time that an individual’s choice of smartphone operating system can provide useful clues when it comes to predicting their personality and other individual characteristics.”

In a second study, the psychologists were then able to develop a computer program that could predict what type of smartphone a person owned based on

A team of experts has developed a prototype

It’s a simple scene that illustrates a milestone in the development of environments allowing humans to interact naturally with machines. In a collaboration between Rensselaer Polytechnic Institute and IBM Research, the Cognitive and Immersive Systems Laboratory (CISL) has reached that milestone, and is poised to advance cognitive and immersive environments for collaborative problem-solving in situations like board rooms, classrooms, diagnosis rooms, and design studios.

“This new prototype is a launching point — a functioning space where humans can begin to interact naturally with computers,” said Hui Su, director of CISL. “At its core is a multi-agent architecture for a cognitive environment created by IBM Watson Research Center to link human experience with technology. In CISL, we created this architecture to integrate technologies that register different kinds of human behavior captured by sensors as individual events and forward them to the cognitive agents behind the scene for interpretation. Enhancing this architecture will allow us to link new sensing technologies and computer vision technologies into the system, and to enable collaborative decision making tools on top of these technologies.”
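A minimal sketch of that kind of event-forwarding architecture, with invented event names and agents: sensor observations are published as discrete events and routed to whichever cognitive agents have subscribed to them.

```python
from collections import defaultdict

class EventBus:
    """Forwards sensor events to the agents that registered for them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, agent):
        self.subscribers[event_type].append(agent)

    def publish(self, event_type, payload):
        for agent in self.subscribers[event_type]:
            agent(payload)

bus = EventBus()
bus.subscribe("speech", lambda text: print("language agent heard:", text))
bus.subscribe("gesture", lambda g: print("gesture agent saw:", g))

bus.publish("speech", "show me last quarter's numbers")
bus.publish("gesture", "point at the left screen")
```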

The current capabilities of the space are rudimentary in comparison with human understanding. The room can understand and register speech, three specific gestures, the position

How people find best mental health resources

Thousands of websites and apps relating to mental health are available but the study discovered that much of the most useful material was difficult to track down in a search.

The presentation explored a finding, published in the British Journal of Psychiatry in 2013, that young people have the worst access of any group to mental health services.

The study was carried out by Dr Diane Pennington, of Strathclyde’s Department of Computer and Information Sciences.

Dr Pennington said: “There’s often an assumption that because someone is young, they will know how to use technology. They do know but it can be on quite a surface level and they need to be able to decide what services are reliable.

“Searches for mental health support may not lead to a health service site and they could find something which does not support them in a positive way. This could happen if they did a search which reflected the way they were feeling, or if they used a clinical term such as ‘depressed’.

“Many young people will not read a page if it predominantly features text, although work has been done on some sites and apps to make them easier to find and more engaging. This could