Community-Specific AI: Building Solutions for Any Audience


With half of the world's population online, and spending over five hours a day there, online communities are flourishing. It is now easier than ever for niche communities to form: gamers can find other players and form teams, dating adults can find better matches, and students of particular subjects can find teachers and help one another. With faster networks, images, audio, and video are increasingly complementing text, creating a richer experience.

However, as these communities grow wider and deeper, they can become targets for toxic behavior. Forums for underage users can be subverted by users attempting illicit solicitation and exploitation. Chat rooms can see participants engaging in cyberbullying and toxic language. According to Pew Research, over half of online users have seen offensive name-calling and intentional embarrassment, while a quarter have witnessed physical threats and even prolonged harassment. This clearly has to stop. Businesses must now manage their communities in a way that first and foremost protects their users while also guarding the brand's reputation.

The traditional way to address this issue was through human moderation. Companies like Google and Facebook hire thousands of moderators to respond to flagged content and unwanted activity while respecting users' desire for sharing and self-expression. While it achieves its goal, this approach does not scale and is beyond the means of most other businesses. More recently, advances in technologies such as Natural Language Processing (NLP) have signaled great promise. But off-the-shelf solutions typically lack the power to represent the unique shared terminology and conversational patterns (e.g., dating chats vs. gaming chats) that each community exhibits, limiting their usefulness.

At Spectrum Labs, we develop community-specific AI solutions that identify and adapt to toxic online behavior. We help our clients detect inappropriate content and deliver insights into how their users interact with their products and with each other, multiplying the impact of moderators so they can respond not only quickly but also proactively. There is no off-the-shelf solution: each community gets a unique set of models that can properly address, and adapt to, its particular needs. In our talk, we review how we tackle the problem of identifying toxic content (e.g., hate speech, cyberbullying, illicit solicitation) while handling the cold-start problem and audience-specific language. We'll cover topics of interest to teams that need to tackle multiple NLP problems across domains where speech and content patterns may change significantly.
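To make the community-specific idea concrete, here is a minimal, purely illustrative sketch: a tiny bag-of-words Naive Bayes classifier trained separately per community, so the same phrase can be scored differently depending on the audience it appears in. The class name, labels, and toy training sentences are all hypothetical and are not Spectrum Labs' actual models or data, which would be trained on large labeled corpora with far richer features.

```python
from collections import Counter
import math


class CommunityClassifier:
    """Toy per-community Naive Bayes over bags of words.

    Illustrative only: stands in for a real community-specific model,
    which would use large labeled corpora and richer representations.
    """

    def __init__(self):
        self.word_counts = {"toxic": Counter(), "ok": Counter()}
        self.doc_counts = {"toxic": 0, "ok": 0}

    def train(self, text, label):
        # Accumulate word and document counts for the given label.
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        # Score each label: log prior + Laplace-smoothed log likelihoods.
        vocab = len(set(self.word_counts["toxic"]) | set(self.word_counts["ok"]))
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("toxic", "ok"):
            total_words = sum(self.word_counts[label].values())
            score = math.log(self.doc_counts[label] / total_docs)
            for word in text.lower().split():
                score += math.log(
                    (self.word_counts[label][word] + 1) / (total_words + vocab)
                )
            scores[label] = score
        return max(scores, key=scores.get)


# The same phrase can be routine banter in one community and a threat in
# another, so each community trains its own model (toy data below).
gaming = CommunityClassifier()
gaming.train("gg ez kill them all", "ok")                 # common gaming banter
gaming.train("you are worthless quit the game", "toxic")

dating = CommunityClassifier()
dating.train("kill them all", "toxic")                    # threatening in a dating chat
dating.train("would love to meet for coffee", "ok")

print(gaming.predict("kill them all"))  # judged against gaming-community usage
print(dating.predict("kill them all"))  # judged against dating-community usage
```

The design choice this sketch highlights is the one the talk is about: rather than one global toxicity model, each community owns its training data and therefore its decision boundary, which is also how community-specific language and cold-start mitigation become per-community concerns.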


Dr. Yacov Salomon, a partner at Super{set} and co-founder of Backbone, has led multiple data science and engineering organizations and teams at companies including Salesforce, Krux, Bigcommerce, and Brandscreen. He was responsible for the research, design, and development of intelligent systems and applications for large-scale AI and machine learning, data mining, and analytics. Yacov holds a Ph.D. in applied mathematics; his academic research focused on probability, information theory, and non-parametric statistics. He is also a lecturer in machine learning and data science at UC Berkeley.
