About us

We started Mantis with a clear idea of what sort of company we wanted it to be, and the sort of work that we wanted to do.

How we help you

We work with you to scope out your project

Each problem is unique and there’s no one-size-fits-all solution.

At Mantis, we’ll have a call with you to understand the problems you’re facing. We’ll then work out how Natural Language Processing (NLP) can be applied to those problems to provide the best solution.

This might involve, for example, looking at a sample of your text data to get a more accurate picture of which Natural Language Processing techniques would work most effectively. Or it might involve working with you to improve your existing NLP and data processes, so your data is in the best shape for future use.

We’ll then recommend options for moving forward, along with estimated costs and timelines.

A Mantis NLP project overview

1

Read

We review the relevant literature for inspiration on the latest and most stable Natural Language Processing techniques for solving the problem.

2

Implement

We design an algorithm (also known as a model) and implement a training process through which the model learns. We use this process either to create your first model or to improve an existing model’s performance.
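
To make this step concrete, here is a deliberately simplified sketch of what a training process can look like for a text classification task, using scikit-learn. The data, labels and choice of model are placeholders; the real setup depends entirely on your problem.

    # A minimal sketch of a training process for a text classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Placeholder data - in a real project this comes from your text data.
    texts = [
        "invoice attached for last month's consulting work",
        "can we move our call to thursday afternoon",
        "payment received, thank you",
        "what times are you free to meet next week",
    ]
    labels = ["finance", "scheduling", "finance", "scheduling"]

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.25, random_state=42
    )

    # The "model": TF-IDF features fed into a logistic regression classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    # Evaluation on held-out data shows how well the model has learned.
    print(classification_report(y_test, model.predict(X_test)))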

3

Deploy

We deploy the model onto your infrastructure of choice: for example, as a Docker container on a cloud provider, or as a standalone software package.
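
As an illustration only (the details depend on your infrastructure), a trained model is often wrapped in a small web service, which can then be packaged as a Docker container. The file name, request shape and service framework below are placeholder choices, not a prescription.

    # A minimal sketch of serving a trained model behind a web API using FastAPI.
    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model.joblib")  # the pipeline saved after training

    class PredictionRequest(BaseModel):
        text: str

    @app.post("/predict")
    def predict(request: PredictionRequest):
        # Run the saved pipeline on the incoming text and return its label.
        prediction = model.predict([request.text])[0]
        return {"prediction": str(prediction)}

    # Run locally with: uvicorn app:app --port 8000  (if this file is app.py)

The same service can then be built into a Docker image and deployed to a cloud provider, or shipped as a standalone package.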

4

Monitor

We set up monitoring of the model’s performance so we can spot dips early and work out how and when to optimise and recalibrate your model, keeping its responses accurate and effective.
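
The exact monitoring depends on the project, but one simple check looks like the sketch below: compare recent predictions against labelled feedback and flag when accuracy drops below an agreed threshold. The threshold value and the alerting mechanism are placeholders.

    # A rough sketch of one monitoring check: compare recent predictions
    # against labelled feedback and flag when accuracy falls too far.
    from sklearn.metrics import accuracy_score

    ACCURACY_THRESHOLD = 0.85  # placeholder value, agreed per project

    def check_recent_performance(recent_predictions, recent_labels):
        """Return True if the model still meets the agreed accuracy threshold."""
        accuracy = accuracy_score(recent_labels, recent_predictions)
        if accuracy < ACCURACY_THRESHOLD:
            # In a real setup this would feed a dashboard or alert a team
            # channel rather than just printing.
            print(f"Model accuracy dropped to {accuracy:.2f} - time to recalibrate")
            return False
        return True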

How an AI model progresses

An NLP model is not perfect when it is first developed. A model improves through experimentation and recalibration after its initial development. Through this continued fine-tuning, a model can better solve the problem it’s been designed for.

Baseline

The baseline model is not perfect, but it’s a first attempt that gives us reasonable results for a task, as well as direction for where to apply more complex solutions.

Advanced

Transitioning from a baseline model to an advanced model requires experimentation with different NLP techniques, parameters and ways of approaching the problem, all of which combine to improve the model’s performance.
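
As a small illustration of that experimentation, a grid search systematically compares different parameter settings and keeps the best-performing combination. The pipeline and parameter grid below are placeholders; real experiments also compare different model architectures and features.

    # A minimal sketch of experimenting with parameters via grid search.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    param_grid = {
        "tfidf__ngram_range": [(1, 1), (1, 2)],  # unigrams vs. unigrams + bigrams
        "clf__C": [0.1, 1.0, 10.0],              # regularisation strength
    }

    search = GridSearchCV(pipeline, param_grid, cv=3, scoring="f1_macro")
    # search.fit(texts, labels)  # texts/labels as in the training sketch above
    # print(search.best_params_, search.best_score_)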

Optimised

If a model needs to run on different infrastructure or a different device (for example, a mobile phone), or needs to interact with the user in real time (for example, an AI-powered chatbot), it needs to be optimised.
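
The right optimisation depends on the model and the target device, but as one illustrative sketch, a PyTorch model can often be shrunk and sped up with dynamic quantization. The model below is a stand-in for a real trained model.

    # A minimal sketch of one optimisation technique: dynamic quantization
    # with PyTorch, which stores linear-layer weights as 8-bit integers to
    # reduce model size and speed up CPU inference.
    import torch

    trained_model = torch.nn.Sequential(  # stand-in for your real trained model
        torch.nn.Linear(768, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 4),
    )

    quantized_model = torch.quantization.quantize_dynamic(
        trained_model, {torch.nn.Linear}, dtype=torch.qint8
    )
    # quantized_model can now be saved and served where memory and latency
    # budgets are tighter, e.g. on a mobile device or in a real-time API.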

Our team

Matt

London

Matt is an environmental scientist by background with a PhD in forestry. He left academia in 2015 and has since worked as a data scientist in a number of UK Government departments, startups, and NGOs. Matt has experience working in the Legal, Health, and Government sectors, where his work focussed on developing reproducible and transparent ways of working and on solving natural language problems.

Nick

Nicosia

Nick studied computer science at Imperial College, where his thesis was on detecting lies from video. He has worked as a data scientist for more than 8 years at various startups and, most recently, at the Wellcome Trust. Since his move to industry he has switched focus from computer vision to NLP, and has experience working with datasets ranging from a few hundred to millions of examples and thousands of classes. He has a strong interest in reproducible, reusable and open source code.

We believe

  1. Experiments should be easily reproducible. This encourages more trust in the results, enables better collaboration within a team, and reduces duplication of work.
  2. Data reflect human biases. AI, and by extension Natural Language Processing applications, mimic those biases and can cause unintended harm. Detecting and reducing ethical risks is the responsibility of all data scientists.
  3. AI and NLP are not neutral technologies. Applying Natural Language Processing tools for military, political or marketing purposes has an impact on our democracy and free will. We think carefully about the impact our work might have, and are selective about the industries and clients we work with.
  4. Open source software and practices have made, among other things, the AI revolution possible. They accelerate progress, and encourage collaboration between people and organisations. We use open source by default, and contribute to the open source community where possible.
  5. It’s our responsibility to combat climate change and support a more sustainable way of life. Training large deep learning models for Natural Language Processing tasks can have an enormous carbon footprint, so we avoid using such models where possible, and use computing resources responsibly and efficiently.

Are you interested in working with us?