• MANTIS is an Advertising-centric Video Understanding Platform
  • Rapidly detect inappropriate and unfavorable content in video prior to ad placement
  • Identify, categorize and classify arbitrary activity in video
  • Improve brand perception by ensuring ad placement next to relevant video activity
Press play to learn more about MANTIS

01 —— What is Mantis

MANTIS is a state-of-the-art, activity-based, advertising-centric automated video understanding platform.

Mantis utilizes ground-breaking research in deep-learning strategies & architectures to perform activity detection & video summarization faster than real time (5-10x faster on a single GPU).
It uses advanced activity & object detection techniques, developed by mimicking the human cognitive processes that tackle the same problem, enabling fine-grained video content categorization & classification.
Deep learning strategies
Activity & object detection
Video content categorization & classification
Mantis benefits from exposure to large volumes of video content, enabling steadily increasing classification accuracy.

02 —— Why Mantis

Growing advertiser awareness of the brand damage caused by ad placement in inappropriate videos has created the need for improved video classification and summarization.

Mantis helps advertisers avoid common pitfalls in video ad placement by ensuring that ads are linked to content that is vetted for brand-safety & brand-relevance.
Maximize brand-safety & brand-relevance for your clients.

With MANTIS, advertisers, content developers & owners will:

Gain assurance that their ads will not be associated with unfavorable online video content.
Place ads alongside desired and relevant activities within a video.
Categorize & classify video content.

Mantis advertiser solution network

03 —— Mantis Features

We’ve developed technology that detects and understands human activity while also recognizing the pitfalls advertisers face when vetting video content for promotional purposes. This gives us the opportunity to apply the technology directly to advertisers’ needs in online video.

Key features:

I.  Activity recognition
Train MANTIS to recognize arbitrary activities in any number of videos.
II.  Classification & categorization
Custom classification of individual videos per campaign or platform needs (e.g. favorable/unfavorable); categorization of videos based on the activities detected within them.
III.  Ad-placement
Place ads at brand-relevant points within video based on detected activities.

04 —— Team Mantis

Dr. Bernard Ghanem
Project Director

Bernard Ghanem is currently an Associate Professor in the CEMSE division and a member of the Visual Computing Center at King Abdullah University of Science and Technology (KAUST).

Before that, he was a Senior Research Scientist at the University of Illinois Urbana-Champaign (UIUC) in Singapore, where he still holds an adjunct position. He heads projects that develop algorithms in computer vision, machine learning, and optimization geared towards real-world applications, including semantic video analysis in sports and automated surveillance, large-scale activity recognition, and 2D/3D scene understanding.

He received his Bachelor’s degree in Computer and Communications Engineering from the American University of Beirut (AUB) in 2005 and his MS/PhD in Electrical and Computer Engineering from UIUC in 2010. His work has received several awards and honors, including the Henderson Graduate Award from UIUC, two consecutive CSE fellowship awards from UIUC, a Best Paper Award (CVPRW 2013), a two-year KAUST Seed Fund, a Google Faculty Research Award in 2015, and the best business plan award in the Vision Industry Entrepreneur Workshop (VIEW) at CVPR 2016. He has co-authored more than 50 peer-reviewed conference and journal papers in his field, as well as 4 patents. He is also a co-founder of AutoScout Inc., which provides automated solutions for sports video analytics.

Feras Almaddah
Chief Executive Officer

CEO with strong business development capabilities in both government and semi-government relations. Highly organized and an exceptional communicator with strong skills in negotiation, problem resolution, and client needs assessment; effective at promoting products and services and identifying opportunities to increase company profits through innovative engineering.

Fabian Caba
Computer Vision Researcher

Fabian Caba is a Ph.D. student at King Abdullah University of Science and Technology, currently focused on the development of novel Computer Vision techniques for video understanding.

He is part of the multicultural and diverse Image and Video Understanding Lab (IVUL) advised by Bernard Ghanem. He received a Master of Science degree in Electronics Engineering from Universidad del Norte, where he worked with Juan Carlos Niebles on efficient video annotation using crowdsourcing. Recently, Fabian landed an internship at Adobe, where he worked on weakly supervised action localization over large-scale sets of web videos.

He and his colleagues designed, organized and hosted The ActivityNet Large Scale Activity Recognition Challenge for two consecutive years at the premier annual Computer Vision event, CVPR. The challenge attracted a large number of participants and was sponsored by several industrial partners, including Google DeepMind, NVIDIA, Qualcomm, and Panasonic.

Humam Alwassel
Computer Vision Researcher

Humam Alwassel is currently pursuing his MS degree in Computer Science at King Abdullah University of Science and Technology (KAUST), where he is a member of the Image and Video Understanding Lab (IVUL) in the Visual Computing Center.

His research focuses on human activity detection in untrimmed videos, deep learning, and computer vision. He received double Bachelor's degrees in Computer Science and Mathematics from Cornell University in 2016, graduating summa cum laude. He is a recipient of the prestigious KAUST Gifted Student Program (KGSP) scholarship (2010-2016).

He is a Program Chair for The ActivityNet Large Scale Activity Recognition Challenge.

Victor Escorcia
Computer Vision Engineer

Victor Escorcia is a scientist and engineer currently pursuing his Ph.D. degree in Electrical Engineering at King Abdullah University of Science and Technology (KAUST).

He is a member of the Image and Video Understanding Lab (IVUL) in the Visual Computing Center. Victor designs and develops novel algorithms for understanding events in continuous video streams, building on deep learning and computer vision expertise.

He received his Master of Science and Bachelor's degrees in Electronics Engineering from the Universidad del Norte in Colombia, where he graduated with the highest honors. Victor has more than five years of experience in machine learning and artificial intelligence applied to images and videos.

Yasser Alireza
Communications & Marketing Director

Yasser Alireza is the director of visual communication at Noryan Corp, managing creative communication needs across all divisions, including security & defense, IT systems and product development.

Yasser’s experience in the field of visual communication involves brand development and advertising through his position as senior art director and English copywriter for Memac Ogilvy & Mather in Jeddah, Saudi Arabia. He helped roll out several pitches and campaigns for regional accounts, such as Al Marai, Unilever, Sunbulah Group, Goody and Kodak, among others.

He co-owned and operated a creative services studio based in Dubai called Turba Studios, where he developed several promotional materials and branding projects for local and regional clients like APCO Worldwide, Sunbulah, Saudia Airlines and Dubai Islamic Economic Development Center (DIEDC). His additional experience includes publishing, as well as social media community management and development, where he has collaborated with Disney Arabia, Novo Cinemas and The Middle East Film and Comic Convention (MEFCC).


05 —— Get in touch

Do you like what you see?
Drop us a line for a private demo: