New Telebimy Technology

Big data for text: Next-generation text understanding and analysis

News portals and social media are rich information sources, for example for predicting stock market trends. Today, numerous service providers allow large text collections to be searched by feeding descriptive keywords to their search engines. Keywords tend to be highly ambiguous, though, and quickly expose the limits of current search technologies. Computer scientists from Saarbrücken have developed a novel text analysis technology that considerably improves searching of very large text collections by means of artificial intelligence. Beyond search, the technology also assists authors in researching and even writing texts by automatically providing background information and suggesting links to relevant websites.

Ambiverse, a spin-off company from the Max Planck Institute for Informatics in Saarbrücken, will present this novel technology at CeBIT 2016 in Hannover, from 14 to 18 March, at Saarland’s research booth.

In the age of business smartphones and enterprise chatrooms, most information in companies is distributed not through spoken words but through e-mails, databases, and internal news portals. “According to a survey by the market analyst Gartner, a mere quarter of all companies are using automatic methods to analyze their textual information. By 2021, Gartner predicts, 65 per cent will do so. This is because the amount of data inside companies keeps growing, and it therefore becomes more and more costly to structure it and search it successfully,” says Johannes Hoffart, a researcher at the Max Planck Institute for Informatics and founder of Ambiverse. His team developed a novel technology for analyzing huge amounts of text, in which massive computing power and artificial intelligence (AI) continuously “think along” in the background.

“For analyzing texts, we rely on extremely large knowledge graphs which are built from freely available sources such as Wikipedia or large media portals on the web. These graphs can be augmented with domain- or company-specific knowledge, such as product catalogs or customer correspondence,” says Hoffart. Using complex algorithms, texts are then screened and analyzed with linguistic tools. “Our software assigns companies and areas of business to their corresponding categories, which allows us to gather valuable insights on how well one’s own products are positioned in the market compared to those of the competitors,” he explains. A particular challenge is that product and company names are anything but unique and can have completely different meanings in different contexts, making them highly ambiguous.
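To make the knowledge-graph idea concrete, here is a minimal sketch in Python. All names and structures below are hypothetical simplifications for illustration, not Ambiverse’s actual data model or API: each ambiguous surface name maps to candidate entities that carry a category label and a few context terms drawn from a description, and domain-specific entries such as catalog products can be added alongside the general-purpose ones.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    entity_id: str                       # canonical identifier, e.g. a Wikipedia page title
    category: str                        # assumed category label, e.g. "Company" or "Product"
    context_terms: set = field(default_factory=set)  # salient words from the entity's description

# Hypothetical mini knowledge graph: one ambiguous surface form, several candidate entities.
# A real system would build this automatically from Wikipedia and other large text sources.
KNOWLEDGE_GRAPH = {
    "Jaguar": [
        Entity("Jaguar_Cars", "Company", {"car", "automotive", "luxury", "british"}),
        Entity("Jaguar_(animal)", "Animal", {"cat", "wildlife", "amazon", "predator"}),
    ],
}

# Domain-specific augmentation, e.g. an entry taken from a company's product catalog.
KNOWLEDGE_GRAPH["Jaguar"].append(
    Entity("Jaguar_XE", "Product", {"sedan", "model", "engine", "dealer"})
)
```

Because every candidate carries a category, any mention that is later resolved to one of these entities inherits that category, which is what allows company and product mentions in large text collections to be grouped and compared.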

“Our technology helps to map words and phrases to the correct real-world objects, thereby resolving ambiguities automatically,” explains the computer scientist. “Paris”, for example, can stand for the City of Light and French capital, but also for a figure from Greek mythology or for a much-mentioned celebrity of German descent, always depending on the context. “Efficiently searching huge text collections is only possible if the different meanings of a name or a concept are correctly resolved,” says Hoffart. The smart search engine developed by his team continuously learns and improves over time, and also automatically assigns new text entries to matching categories. “These algorithms are therefore attractive for companies that analyze online media or social networks to measure the degree of brand awareness for a product or the success of a marketing campaign,” adds Hoffart.
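One very rough way to picture this kind of context-based disambiguation is to score each candidate entity by how many of its description terms appear near the mention. The sketch below is a deliberately naive bag-of-words heuristic, intended only to illustrate the principle; it is not the algorithm Ambiverse uses, and the candidate descriptions are made up for the “Paris” example.

```python
# Toy candidate descriptions for the ambiguous mention "Paris" (hypothetical data).
CANDIDATES = {
    "Paris": {
        "Paris_(France)": {"france", "capital", "city", "seine", "eiffel"},
        "Paris_(mythology)": {"troy", "helen", "greek", "prince", "myth"},
        "Paris_Hilton": {"heiress", "hotel", "party", "reality", "celebrity"},
    },
}

def disambiguate(mention, context_words, candidates=CANDIDATES):
    """Return the candidate whose description overlaps most with the surrounding words.

    Real systems combine far more signals (popularity priors, entity-entity
    coherence, learned similarity models); this only captures the core intuition.
    """
    options = candidates.get(mention)
    if not options:
        return None
    context = {w.lower().strip(".,") for w in context_words}
    return max(options, key=lambda name: len(options[name] & context))

print(disambiguate("Paris", "Paris is the capital of France".split()))
# -> Paris_(France)
```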

At CeBIT, Ambiverse will also present a smart authoring platform that assists authors in researching and writing texts. Users who enter text are automatically provided with background information, for example company-internal guidelines and manuals or web links. “Relevant concepts are linked automatically, and links for further research are shown,” says the computer scientist.
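In practice, such authoring assistance amounts to running the same entity resolution over a draft and turning resolved mentions into suggested links. The snippet below sketches one plausible output shape; the link targets and helper names are hypothetical and are not Ambiverse’s actual interface.

```python
import re

# Hypothetical mapping from resolved entities to reference material
# (internal guidelines, manuals, or public web pages).
LINK_TARGETS = {
    "Paris_(France)": "https://en.wikipedia.org/wiki/Paris",
}

def annotate(text, resolved_mentions, link_targets=LINK_TARGETS):
    """Wrap resolved mentions in Markdown links so an author sees suggestions inline.

    `resolved_mentions` maps a surface form to the entity it was disambiguated to.
    """
    for surface, entity_id in resolved_mentions.items():
        url = link_targets.get(entity_id)
        if url:
            text = re.sub(rf"\b{re.escape(surface)}\b", f"[{surface}]({url})", text)
    return text

draft = "Our team is expanding the Paris office next quarter."
print(annotate(draft, {"Paris": "Paris_(France)"}))
# -> Our team is expanding the [Paris](https://en.wikipedia.org/wiki/Paris) office next quarter.
```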

Visitors to the Ambiverse booth at CeBIT (hall 6, booth 28) will also have the opportunity to compete against this novel AI technology in a question-answering game. Ambiverse is funded by the German Federal Ministry for Economic Affairs through an EXIST Transfer of Research grant.

HP Puts the Future of Computing On Hold

Plans by Hewlett-Packard for computers based on an exotic new electronic device called the memristor have been scaled back.

In April I wrote about an ambitious project by Hewlett-Packard to use an electronic device for storing data called the memristor to reinvent the basic design of computers (see “Machine Dreams”). This week HP chief technology officer Martin Fink, who started and leads the project, announced a rethink of the project amidst uncertainty over the memristor’s future.

Fink and other HP executives had previously estimated that they would have the core technologies needed for the computer they dubbed “the Machine” in testing sometime in 2016. They used the timeline at the bottom of this post to sketch out where the project was headed.

But the New York Times reported yesterday that the project has been “repositioned” to focus on delivering the Machine using less exotic memory technologies: the DRAM found in most computers today, and a technology just entering production called phase change memory, which stores data by melting a special material and controlling how it cools.

With memristors out of the picture, there’s reason to doubt just how revolutionary HP’s project can be.

The main feature of the Machine’s design was to be a large collection of memristor memory chips. They would allow computers to be more powerful and energy efficient by combining the best properties of two different components of today’s machines: the speed of the DRAM that holds data while a processor uses it, and the capacity and ability to hold data without power seen in storage drives based on hard disks or flash memory.

Prototypes of the Machine built with DRAM and phase change memory in place of memristors had always been part of the plan. But when I met Fink and others working on the project, I also heard that those technologies would hobble the idea at the heart of the Machine.

Because DRAM can’t store data very densely and must always be powered on, computers built around a large block of it will require a lot of space and power. Meanwhile, phase change memory is too slow compared to DRAM to be much use for data being worked on. When I met Stan Williams, who leads HP’s work on memristors, he dismissed the idea that any other technology could be used to reinvent the basic design of computers as HP wanted. Fink did a good job in this 2014 blog post of explaining why his team believed only memristors could build the Machine.

Still, this week’s climbdown is not a complete surprise. Fink used the timeline below as recently as December 2014, predicting that memristor memory would “sample” in 2015 and be “launched” in 2016. But a few months later, in February of this year, he told me that sampling was most likely in 2016, an estimate that HP’s manufacturing partner SK Hynix would not confirm. Microelectronics experts I spoke to said it looked challenging to make reliable memristors in the large, dense arrays needed for a memory chip.

HP now appears to be avoiding making any prediction for when the technology will be mature. The company has not yet responded to a request for comment.
