Emerging Technologies

How AI can save our humanity

A wonderful talk by renowned computer scientist Kai-Fu Lee (@kaifulee).

AI is massively transforming our world, but there's one thing it cannot do: love. In a visionary talk, computer scientist Kai-Fu Lee details how the US and China are driving a deep learning revolution -- and shares a blueprint for how humans can thrive in the age of AI by harnessing compassion and creativity.

An evening with Mike Butcher from TechCrunch


Mike Butcher MBE is Editor-at-large of TechCrunch, the biggest breaking-news site covering the world’s hottest tech companies. Mike has been named one of the most influential people in technology and is a regular commentator on the tech business. He founded the Europas Conference & Awards and the charity Techfugees, and has been an advisor on startups to the British Prime Minister and the Mayor of London. He was awarded an MBE in the Queen’s Birthday Honours list 2016 for services to the UK technology industry and journalism.

We had the privilege of spending an evening with Mike, and hearing him speak about Artificial Intelligence and the future of technology.
Needless to say it was quite enlightening…

AHA Moments in Deep Learning

"Zendesk's Answer Bot uses deep learning to understand customer queries, responding with relevant knowledge base articles that allow customers to self-serve. Research and development behind the ML models underpinning Answer Bot has been rewarding but punctuated with pivotal deviations from our charted course"

I recently had the opportunity to hear Zendesk data scientists Arwen Griffioen and Chris Hausler talk about their journey from product ideation to launch, starting with a traditional customer-based machine learning approach and ending with a single global deep learning model that serves tens of thousands of accounts.

This was a fantastic talk that gave a really good insight into how the Zendesk Machine Learning team works and what they value. Both Arwen and Chris have research backgrounds, which is always great to see. Arwen has a computer science background and finished her PhD on ecological modelling (using MLA) in 2015. Chris has a computational neuroscience background and finished his PhD in 2014.

Apparently the data team at Zendesk is like a team sport: a real mix of talent, with engineers, software developers and data scientists all working together towards a common goal. I love that any additional training (i.e., deep learning) is done as a team and includes everyone, regardless of their specific role. I’ve heard of data science teams where only the most senior are allowed to upskill at work and then pass on the knowledge — the rest have to do it in their own time — which is ludicrous. High-performing teams work when people are encouraged to grow, learn, and develop new skills.

The talk began with the anatomy of a data product. I loved their iceberg analogy. While things may appear to be advancing smoothly (at least in press releases, conference talks, and shareholder letters), the bulk of the time is really spent researching new methods, trying things that ultimately fail — life would be terribly boring if we had all the answers — designing, testing, and re-engineering.

Supervised classification:

To build the Answer Bot, the team started out with a fairly simple machine learning model. By simple I mean supervised learning: NLP features extracted from a support ticket, and a logistic classifier to predict the most relevant help document or article. The assumption was that this would be fairly accurate, performant, familiar and explainable. Because the industry, and therefore the context around tickets, varies broadly across Zendesk’s clients, labelled data would need to be provided for each client; and because the Answer Bot learns on the job and improves with more data, you can’t really switch it on from the get-go. For this to work well the team needed to spend a lot of time preprocessing data.
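
To make that baseline concrete, here is a minimal sketch of the kind of supervised approach described above, assuming TF-IDF text features and scikit-learn’s logistic regression. The tickets, labels and article ids are invented for illustration, and this is not Zendesk’s actual pipeline.

```python
# A minimal sketch of a supervised ticket-to-article classifier, assuming
# TF-IDF features and scikit-learn's logistic regression. The tickets,
# labels and article ids below are hypothetical; this is not Zendesk's
# actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data for a single client: ticket text -> best article.
tickets = [
    "I can't log in to my account",
    "How do I reset my password?",
    "My invoice amount looks wrong",
]
labels = ["article_login", "article_password_reset", "article_billing"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tickets, labels)

# Rank candidate articles for a new ticket by predicted probability.
probs = model.predict_proba(["I forgot my password"])[0]
ranking = sorted(zip(model.classes_, probs), key=lambda pair: -pair[1])
print(ranking)
```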

Unsupervised classification:

The team explored unsupervised classification, using both tickets and articles as inputs, which worked well, except that it would require roughly 100,000 different models (one for each client), and these take a really long time to train. Part of the reason is that the same word can have a very different meaning depending on the user, and different industries have different sets of words. For example, "ticket" may mean an issued ticket, given so that someone can join a queue, or it could be something that is purchased, for example a movie ticket. Answering a question such as "what do I do if I lose my ticket" requires a good understanding of context. If you try to build a single model with all the words in the dictionary, you're going to run out of parameters pretty quickly.
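
For contrast, here is a rough sketch of how an unsupervised, per-account approach might look: vectorise one account’s articles and match an incoming ticket to the nearest article by cosine similarity. Because the vocabulary (and hence the model) is fitted per account, this is the regime that needs one model per client. The article texts and the ticket below are invented for illustration.

```python
# A rough sketch of an unsupervised, per-account approach: vectorise ONE
# account's help articles and match an incoming ticket to the nearest
# article by cosine similarity. The vocabulary (and hence the model) is
# fitted per account, which is why this approach needs one model per client.
# Article texts and the ticket are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "article_refunds": "How to request a refund for a purchased movie ticket",
    "article_queueing": "What to do if you lose your queue ticket at the service desk",
}

vectoriser = TfidfVectorizer()                      # fitted on this account only
article_matrix = vectoriser.fit_transform(articles.values())

ticket = ["what do I do if I lose my ticket"]
scores = cosine_similarity(vectoriser.transform(ticket), article_matrix)[0]

best_article = max(zip(articles, scores), key=lambda pair: pair[1])
print(best_article)  # what "ticket" means depends entirely on this account's corpus
```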

Pivoting to Deep Learning:

This happened quite a few months down the track and came partly out of their journal club. They essentially started from scratch, which required loads of reading and retraining the whole team. There was a lot of uncertainty and not really knowing what they were doing, but they did know that NLP problems work well with deep learning, and that the more data you can throw at it the better. Zendesk has no shortage of data. After the talk I asked Arwen how much she and the team knew about deep learning before coming to Zendesk and her answer was “basically nothing” (I love this company!)

The team split into two groups and tackled various aspects of the problem. I wasn’t surprised to hear they use TensorFlow. I was really pleased to hear Chris say that problem solving is a creative process — the mark of a great researcher, and not something you can learn easily. 
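
The talk didn’t go into the architecture they settled on, but for readers unfamiliar with how a single global text model can serve many accounts, here is a generic TensorFlow/Keras dual-encoder sketch: one shared encoder embeds both tickets and articles into the same vector space, and a dot product scores their relevance. The vocabulary size, dimensions and layers are placeholders, not Zendesk’s published design.

```python
# A generic TensorFlow/Keras dual-encoder sketch of a single global text
# model: one shared encoder embeds both tickets and articles, and a dot
# product scores their relevance. Vocabulary size, dimensions and layers are
# placeholders; this is NOT Zendesk's published architecture.
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, SEQ_LEN = 20_000, 64, 50

def make_encoder():
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(EMBED_DIM, activation="relu"),
    ])

shared_encoder = make_encoder()        # one encoder shared across all accounts

ticket_ids = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32", name="ticket")
article_ids = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32", name="article")

ticket_vec = shared_encoder(ticket_ids)
article_vec = shared_encoder(article_ids)
relevance = tf.keras.layers.Dot(axes=1)([ticket_vec, article_vec])  # raw logit

model = tf.keras.Model([ticket_ids, article_ids], relevance)
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
model.summary()
```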

The initial perception of deep learning was that you could develop one robust model, that it would work well, and that the more data you threw at it the better it would work. This is one of my big worries about machine learning and deep learning. Weights are determined as if by magic, loss functions are calculated, and "accurate" results are taken as gospel. From my experience with astronomy data, I can tell you right now that if you start with ALL THE CRAPPY DATA you can still get a good fit; after all, you just need to keep adding parameters (seven-dimensional string theory, anyone?). BUT... the result will inevitably be meaningless. Chris summed this up eloquently: "if you put shit in, you're going to get shit out"... or something to that effect. So this is where things get really exciting. This is where you have to go back and figure out each step of the miracle that is deep learning, exploring everything that’s going on and what could be implemented: whether the data introduces unintended biases (it turns out accounts with large numbers of tickets were artificially skewing things), and whether there are overfitting problems (hint: unless you have an underlying physical model there will almost always be overfitting problems).
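
As one illustration of that kind of bias check (and only an illustration, not the specific fix the team used), you could cap the number of tickets sampled per account so that a handful of very large accounts cannot dominate the training set:

```python
# One illustrative bias check (not the team's specific fix): cap the number
# of tickets sampled per account so a few very large accounts cannot
# dominate the training set and skew the model.
import random
from collections import defaultdict

def cap_per_account(tickets, max_per_account=500, seed=0):
    """tickets: iterable of (account_id, ticket_text, label) tuples."""
    by_account = defaultdict(list)
    for ticket in tickets:
        by_account[ticket[0]].append(ticket)
    rng = random.Random(seed)
    capped = []
    for account_tickets in by_account.values():
        rng.shuffle(account_tickets)
        capped.extend(account_tickets[:max_per_account])
    return capped
```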

Of course the hard work paid off, and it sounds like they’ve come up with a bloody good solution. The entire process took six months, and it was a good year before the product was considered reliable enough for deployment. They spent quite a lot of time validating the model, developing reliable performance metrics, ensuring consistency, and taking the time to do proper human user testing. I was both surprised and pleased that Zendesk allowed the team to spend so much time researching. Since I’ve never worked at a tech company I’m not sure what would be considered normal, but my impression is that many data science teams are expected to turn out data analysis results on pretty short timescales, regardless of data quality.

Lessons from the team:

  • ML products are really hard work.
  • “Vanilla” ML works really well. Logistic regression and Random Forest work really well.
  • Always start with the simplest model.
  • Deep learning isn’t magic.
  • When it finally works, it’s great.

Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy


by Jan Kremer, Kristoffer Stensbo-Smidt, Fabian Gieseke, Kim Steenstrup Pedersen, and Christian Igel, from the University of Copenhagen. Article published in IEEE Intelligent Systems, 32 (2):16-22, 2017

I recently came across the above conference paper/journal article on Twitter.

It talks about data rates for future astronomical surveys and facilities, specifically the Large Synoptic Survey Telescope (LSST) – which will generate roughly 30 TB per night – the Thirty Meter Telescope (TMT), and the rise of citizen science projects to support part of the data analysis. Interestingly, none of the five authors is an astronomy researcher; they are data scientists or computer scientists specialising in machine learning, big-data analytics, computer vision and image analysis. I'm not sure how often large-scale astronomy projects feature in other disciplines, but it certainly got my attention. They bring a unique and valuable perspective to the discussion of big data in astronomy and the challenges that will need to be faced in future large surveys.

The authors talk about how astronomical big data can trigger advancements in machine learning and image analysis. In astronomy I think it's quite rare that this is discussed in detail, other than noting that astronomy "data analysis techniques are translatable" and that large projects tend to drive innovation in large-scale computing and data management, as well as advancing the development of detector technologies, lightweight engineering and infrastructure, among other things. Perhaps because machine learning is at such an early stage in astronomy, we are only really starting to get a good handle on how useful it can be for research, let alone improving the algorithms themselves.

In the second half of their paper the authors talk about describing the shape of a galaxy using machine learning, and how the

"star formation rate (SFR) could be predicted from the shape index, which measures the local structure around a pixel going from dark blobs over valley-, saddle point-, and ridge-like structures to white blobs." 

This is a pretty big claim, since it requires some understanding of how the properties of light in a given filter translate to the physics of stellar emission. I need to look at the two papers they reference to see how they extract information on SFR, or their ideas about how you could do this. Assuming you had a good working knowledge of the HST detectors, the image filters used, and a robust method for measuring the photometry, then it's possible that you could draw some reasonably good results from the shape index. Of course, what you don't see in an astronomical image (the interconnected dust lanes that obscure light) is scientifically just as important as what you do see.
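
For reference, the shape index has a standard definition (Koenderink & van Doorn): the eigenvalues of the local image Hessian are mapped onto a single value in [-1, 1], running from blob-like through valley-, saddle- and ridge-like structures to blobs of the opposite sign. Below is a rough numpy/scipy sketch under that definition; the "galaxy" image is a random placeholder, and which end corresponds to dark versus bright blobs depends on the sign convention you adopt for the Hessian.

```python
# A rough sketch of the shape index (Koenderink & van Doorn): the eigenvalues
# of the Gaussian-smoothed image Hessian are mapped to a single value in
# [-1, 1], running from one kind of blob through valley-, saddle- and
# ridge-like structures to blobs of the opposite sign (which end corresponds
# to dark vs. bright blobs depends on the Hessian sign convention).
# The "galaxy" image here is a random placeholder array.
import numpy as np
from scipy.ndimage import gaussian_filter

def shape_index(image, sigma=2.0, eps=1e-12):
    # Second-order Gaussian derivatives = smoothed Hessian components.
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian at every pixel (k1 >= k2).
    half_trace = (hxx + hyy) / 2.0
    delta = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    k1, k2 = half_trace + delta, half_trace - delta
    return (2.0 / np.pi) * np.arctan((k1 + k2) / (k2 - k1 - eps))

galaxy = np.random.default_rng(0).random((128, 128))  # stand-in for a real cutout
s = shape_index(galaxy, sigma=2.0)
print(s.min(), s.max())  # values fall in roughly [-1, 1]
```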

They also talk about sample selection bias in machine learning, and note that this is a real challenge in astronomy. Training sets are typically created from "old" surveys, the most comprehensive being the Sloan Digital Sky Survey (SDSS), whereas future astronomy surveys will be taken with far superior cameras: ground-based telescopes with much larger collecting areas – resulting in deeper images – and space-based telescopes that will see the Universe quite literally in a different light. So the challenge is being able to create reasonably good proxy training sets. The effects of selection bias can be mitigated, to some extent, using importance weighting, where more weight is given to examples in the training sample that lie in regions of feature space underrepresented in the test sample, and vice versa. The challenge lies in reliably estimating these weights.
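
As a hedged sketch of how those importance weights are often estimated in practice (one common approach, not necessarily the authors' method): train a probabilistic classifier to separate "training survey" objects from "target survey" objects in feature space, and use its predicted odds as an approximate density ratio. The feature arrays below are random placeholders.

```python
# One common way to estimate importance weights for covariate shift (an
# illustration, not necessarily the authors' method): train a probabilistic
# classifier to separate "training survey" objects from "target survey"
# objects in feature space, and use its predicted odds as an approximate
# density ratio. The feature arrays below are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(5000, 4))    # e.g. SDSS-like features
X_target = rng.normal(0.5, 1.2, size=(5000, 4))   # e.g. deeper-survey features

X = np.vstack([X_train, X_target])
y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_target))])
clf = LogisticRegression(max_iter=1000).fit(X, y)   # models P(target | x)

p_target = clf.predict_proba(X_train)[:, 1]
weights = p_target / (1.0 - p_target)        # ~ p_target(x) / p_train(x)
weights *= len(weights) / weights.sum()      # normalise to mean 1

# `weights` can then be passed as sample_weight when fitting the science model.
print(weights.min(), weights.mean(), weights.max())
```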

In the final section, the authors address the issue of interpretability of machine learning models. This is something that I think about quite a lot when discussing the merits of data-driven discovery using data mining and machine learning models. Ultimately you're aiming to answer scientific questions, and you need to be able to feed observations and measured parameters back into theoretical models. The problem is that with purely data-driven techniques you may end up with accurate models and predictions, but the results may not be meaningful, especially if they violate the laws of physics. On the other hand, using machine learning to extract or predict potentially interesting classes of objects is really useful, particularly for projects like LSST that will detect millions of transients each night...


Machine learning: the power and promise of computers that learn by example

A few days ago the UK Royal Society published their two-year policy project on Machine learning: the power and promise of computers that learn by example.

The project began in November 2015 with the aim of investigating the potential of machine learning over the next 5-10 years, and the barriers to realising that potential in the UK. As it carried out its investigation, the project engaged with key audiences – in policy communities, industry, academia, and the public – to raise awareness of machine learning, understand views held by the public and contribute to public debate about this technology, and identify the key social, ethical, scientific, and technical questions that machine learning presents.

The full report (PDF, 3.3 MB), published on 25 April 2017, comes at a critical time in the rapid development and use of this technology, and in the growing debate about how it will reshape the UK economy and people’s lives.

Day 3: Pause Fest 2017


Highlights from the Twitterverse