An offshoot of machine learning, deep learning takes various approaches to the primary and most important goal of AI research: getting computers to model our world well enough that they acquire something akin to what we humans call intelligence.
All deep learning approaches share a basic trait at the conceptual level: raw data is interpreted through multiple processing layers with the aid of DL algorithms. Each layer takes the output of the previous layer as its input and creates a more abstract representation of it. Consequently, as more and more data is fed into the right algorithm, it becomes easier to infer more general rules and features for a given scenario, and the system gets smarter at handling new, similar situations.
Two prominent examples of DL in action are Google Translate’s science-fiction-like “Word Lens” feature, which is powered by a deep learning algorithm, and DeepMind’s recent AlphaGo victory. The triumphant AlphaGo, however, is not a pure neural net but a hybrid, combining deep reinforcement learning with tree search, one of the foundational techniques of classical AI.
Complex computational problems such as image classification or natural language processing cannot be solved easily by simple algorithms, but they are addressed well by deep learning. Yet the current business uses of DL are quite limited. Current best practices in many industries could soon be disrupted by companies that leverage machine learning today and unlock the untapped potential of deep-learning-based approaches. With John Giannandrea, formerly Google’s head of AI, taking over the company’s search department, many recent articles have speculated about how DL is going to revolutionize search and radically transform the entire field of Search Engine Optimization.
THE FUTURE OF PERSONALIZATION: DEEP LEARNING FUELED RECOMMENDER SYSTEMS
It is pretty certain that deep learning will be the next quantum leap in the field of personalization. It is well established that personalization drives sales, increases engagement and improves the overall user experience, which makes it an increasingly important focus for businesses ranging from e-Commerce stores to publishers and marketing agencies. If we consider data to be the fuel of personalization, recommender systems are its engine. Advances in personalization algorithms therefore have a powerful impact on users’ online experiences across domains and platforms.
Let’s take a peek at three specific areas where deep learning can complement and improve upon the existing recommender systems.
INTEGRATING CONTENT INTO THE RECOMMENDATION PROCESS
A standard task for recommender systems is item-to-item recommendation: an e-Commerce store or publisher site recommends a product or piece of content similar to the one the user is currently viewing. This can be handled with metadata, but the typical data source is user interactions, which is what Amazon uses and what results in “users who bought this item also bought…” logic. In a large percentage of real-life situations, however, metadata values are assigned unsystematically or are missing entirely; poor metadata quality is a recurring problem. And even when meta-tags are perfect, they describe the actual item only indirectly and in far less detail than, for example, a picture does. With deep learning, the actual, intrinsic properties of the content, including images, video and text, can be woven into the recommendations, making the system far less reliant on manual tagging. DL can thus obviate the need for extensive interaction histories and manual tags, and item-to-item relations can be based on a much more comprehensive picture of the product.
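As a rough illustration of how content itself could drive item-to-item similarity, here is a minimal sketch that embeds product images with a pretrained CNN and ranks catalogue items by cosine similarity instead of metadata tags. The model choice (ResNet-18), file paths and catalogue structure are assumptions for the example, not a reference to any particular production system.

```python
# Hypothetical sketch: content-based item-to-item similarity from product images.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained CNN and drop its classification head to get a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(image_path: str) -> torch.Tensor:
    """Return an L2-normalised feature vector for one product image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img).squeeze(0)
    return vec / vec.norm()

def most_similar(query_path: str, catalogue: dict, k: int = 5):
    """Rank catalogue items ({item_id: image_path}) by cosine similarity to the query image."""
    q = embed(query_path)
    scores = {item_id: float(q @ embed(path)) for item_id, path in catalogue.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In this sketch the recommendation step needs no interaction history at all; similarity comes entirely from what the items look like.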
In 2014, Spotify attempted to incorporate content into its recommender system in order to make its song recommendations more diverse and create a better personalized experience for its users. The music streaming service had been relying on collaborative filtering for its recommendations. Sander Dieleman, a Ph.D. student and intern at Spotify, saw this as its biggest drawback: any approach that relies entirely on usage data inevitably ignores, and therefore under-represents, hidden gems and lesser-known songs by upcoming artists, the must-haves of music discovery. To overcome this flaw, Dieleman trained a DL algorithm on 30-second excerpts from half a million songs to analyze the music itself. As in image classification problems, successive layers of the network learned progressively more complex and invariant features of the songs. On the topmost fully-connected layer of the network, just before the output layer, the learned filters turned out to be highly selective for certain sub-genres such as gospel, Chinese pop or deep house. This means a system can make recommendations based solely on the similarity of songs, an ideal feature for putting together personalized playlists. Whether or not Spotify ultimately used these findings in its algorithm is not known, but the experiment was doubtless significant.
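The sketch below is a simplified, self-contained approximation of that idea: a small convolutional network reduces a 30-second audio excerpt to an embedding, and nearest neighbours in embedding space drive similarity-based recommendations. The layer sizes, clip length and librosa-based preprocessing are assumptions for illustration, not Spotify’s actual pipeline.

```python
# Simplified sketch of learning song embeddings from audio, loosely inspired by
# the experiment described above. Network shape and preprocessing are assumptions.
import librosa
import torch
import torch.nn as nn

def mel_spectrogram(path: str, sr: int = 22050, duration: float = 30.0) -> torch.Tensor:
    """Load a 30-second excerpt and convert it to a log-mel spectrogram (1, 128, time)."""
    audio, _ = librosa.load(path, sr=sr, duration=duration)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
    return torch.tensor(librosa.power_to_db(mel), dtype=torch.float32).unsqueeze(0)

class SongEmbedder(nn.Module):
    """1-D convolutions over time, pooled down to a fixed-size song embedding."""
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(128, 256, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(256, embedding_dim)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        x = self.conv(mel)             # (batch, 256, time')
        x = x.mean(dim=-1)             # global average pooling over time
        x = self.head(x)               # (batch, embedding_dim)
        return x / x.norm(dim=-1, keepdim=True)

# With a trained model, recommendation reduces to nearest neighbours in embedding space.
model = SongEmbedder().eval()
with torch.no_grad():
    a = model(mel_spectrogram("song_a.mp3"))   # placeholder file names
    b = model(mel_spectrogram("song_b.mp3"))
similarity = float((a * b).sum())              # cosine similarity of the two songs
```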
HANDLING THE ‘COLD-START’ PROBLEM
The cold-start problem is the biggest impediment, and can fairly be called the arch-rival, of recommendation systems. It affects both users and items. For users, it means the system has very little or no information about the customer’s behavior and preferences. The item cold-start means the lack of user interactions from which item-to-item relations can be drawn; the metadata is still there, but it is not adequate for truly fine-tuned recommendations. The item cold-start is an obvious domain for the content-based approach, as it makes the system less dependent on transactional and interactional data.
But developing meaningful personalized experiences for new users is a complex problem that is not necessarily solved by simply collecting more information about them. Consider e-Commerce sites or online marketplaces with wide product portfolios. Customers typically visit such a website with completely different goals over time: the first visit may be to buy a television, the next to hunt for a book. In that situation, the data gathered in the first visit is not relevant to the second session.
Session-based, or item-to-session, recommendation is an interesting approach to handling the cold-start problem. Instead of relying on a customer’s entire interaction history, the system splits that history into separate sessions, and the model that captures the customer’s interests is then built on session-specific clickstreams. With this approach, future recommender systems are likely to depend far less on detailed customer profiles built up over months or even years; they would be able to make fairly relevant recommendations after the customer has clicked around the website for only a short while, as sketched below.
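As a concrete illustration of the session-splitting step, the sketch below groups a raw clickstream into sessions using an inactivity gap. The 30-minute threshold and the event structure are assumptions chosen for the example.

```python
# Hypothetical sketch: splitting a user's clickstream into sessions by inactivity gap.
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)

def split_into_sessions(events):
    """events: list of (timestamp, item_id) tuples sorted by time."""
    sessions, current, last_ts = [], [], None
    for ts, item_id in events:
        if last_ts is not None and ts - last_ts > SESSION_GAP:
            sessions.append(current)   # gap too long: close the current session
            current = []
        current.append(item_id)
        last_ts = ts
    if current:
        sessions.append(current)
    return sessions

clicks = [
    (datetime(2016, 3, 1, 10, 0), "tv_42"),
    (datetime(2016, 3, 1, 10, 5), "tv_55"),
    (datetime(2016, 3, 2, 21, 0), "book_991"),   # next day: treated as a new session
]
print(split_into_sessions(clicks))   # [['tv_42', 'tv_55'], ['book_991']]
```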
This area is not yet well researched, but it promises a great opportunity for enhancing personalized online experiences. Researchers at Gravity R&D working on the European Union funded CrowdRec project recently published a paper that describes a Recurrent Neural Network (RNN) approach to session-based recommendations. It is the first research paper to apply DL to session-based recommendations, and its results show that the method significantly outperforms the state-of-the-art algorithms currently used for this task.
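To make the general idea concrete, here is a minimal sketch of an RNN over session clickstreams: at each step the network reads the current item and produces a score for every catalogue item as the likely next click. The layer sizes, GRU choice and toy data are assumptions for illustration; this is not a reproduction of the published model or its training objective.

```python
# Minimal sketch of a session-based recommender using a recurrent network.
import torch
import torch.nn as nn

class SessionRNN(nn.Module):
    def __init__(self, n_items: int, embed_dim: int = 64, hidden_dim: int = 100):
        super().__init__()
        self.item_embedding = nn.Embedding(n_items, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, n_items)   # one score per catalogue item

    def forward(self, session_items: torch.Tensor) -> torch.Tensor:
        # session_items: (batch, session_length) of item indices clicked so far
        x = self.item_embedding(session_items)
        hidden_states, _ = self.gru(x)
        return self.output(hidden_states)              # (batch, length, n_items)

# Toy usage: one session of three clicks; the scores at the last step rank
# candidate next items, which become the recommendations for this session.
model = SessionRNN(n_items=1000)
session = torch.tensor([[12, 47, 345]])                # placeholder item indices
next_item_scores = model(session)[:, -1, :]
top_k = next_item_scores.topk(5).indices
```

Training such a model typically amounts to predicting the next click in each session (for example with a cross-entropy loss over the item scores), so no long-term customer profile is required at all.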
THE DEFINING MOMENTS: THE MOMENTS OF TRUTH
Moments of truth are the very short periods of time in which users make their decisions, based on the company’s communication and the information available to them. While momentary, impulsive impressions are also major factors, these decisions are by and large heavily influenced by long-term personal preferences and brand loyalty. A DL-based approach to wooing users during these golden moments of truth can yield further novel insights into the intrinsic human decision process.
It is a well-known fact that attractive images of a product can boost sales; entire industries are built around photographing furnished rooms or exotic cuisine. However, it would be worth assessing, through a DL-based image analysis approach, exactly which visual characteristics of a product image have a substantial positive impact on sales. A rough sketch of such an analysis follows.
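One hedged way to frame this is to relate image embeddings to observed conversion rates and see which embedding dimensions carry the most weight. Everything here, the placeholder data, the ridge-regression model and the embedding size, is an assumption for illustration only.

```python
# Hypothetical sketch: relating product-image features to sales outcomes.
import numpy as np
from sklearn.linear_model import Ridge

# Suppose each row of X is an image embedding (e.g. from a pretrained CNN as in
# the earlier sketch) and y holds the product's conversion rate. Random
# placeholders stand in for real data here.
X = np.random.rand(200, 512)
y = np.random.rand(200)

model = Ridge(alpha=1.0).fit(X, y)

# Large-magnitude coefficients point at embedding dimensions (i.e. visual traits)
# most associated with higher conversion, a starting point for deeper analysis.
influential_dims = np.argsort(np.abs(model.coef_))[::-1][:10]
print(influential_dims)
```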
And the list goes on… Personalization is undoubtedly a must-have in today’s internet industry, and DL will harness the huge potential in this field. Businesses will have no option but to closely monitor advances in DL to retain their cutting edge and remain competitive.
Manu Jeevan is a self-taught data scientist and loves to explain data science concepts in simple terms. You can connect with him on LinkedIn, or email him at manu@bigdataexaminer.com.