We started Mantis with a clear idea of what sort of company we wanted it to be, and the sort of work that we wanted to do.
Each problem is unique and there’s no one-size-fits-all solution.
At Mantis, we’ll have a call with you to understand the problems you’re facing. We’ll then match those problems to the Natural Language Processing (NLP) techniques best suited to solving them.
This might involve, for example, looking at a sample of your text data to get a more accurate understanding of which Natural Language Processing techniques would work most effectively. Or it might involve working with you to improve your existing NLP and data practices so that your data is in the best shape for future use.
We’ll then recommend options for moving forward and what the estimated costs and timeline would be.
Read
We review the relevant literature to identify the latest and most stable Natural Language Processing techniques for solving the problem.
Implement
We design an algorithm (also known as a model) and implement a training process for it to learn from your data. We use this process either to create your first model or to improve an existing model’s performance.
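To give a concrete flavour, the sketch below shows what a baseline training process for a simple text classification problem might look like; the example texts, labels and model choice are purely illustrative.

```python
# A minimal, hypothetical baseline: TF-IDF features + logistic regression
# trained on a handful of example texts. Real projects use your own data
# and a model chosen for your specific problem.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "refund my order please",
    "the package arrived damaged",
    "how do I reset my password",
    "I cannot log in to my account",
]
labels = ["delivery", "delivery", "account", "account"]

# The pipeline learns a mapping from raw text to labels.
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

print(model.predict(["my parcel never showed up"]))
```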
Deploy
We deploy the model onto your infrastructure of choice, for example as a Docker container on a cloud provider or as a standalone software package.
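As a rough illustration, one common pattern is to wrap the trained model in a small web service that can then be packaged as a Docker container. The sketch below assumes a scikit-learn model saved as model.joblib and uses Flask purely as an example; the actual setup depends on your infrastructure.

```python
# A minimal, hypothetical serving sketch: load a saved model and expose
# a /predict endpoint. The file name and request fields are illustrative.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical path to a trained model

@app.route("/predict", methods=["POST"])
def predict():
    texts = request.get_json()["texts"]          # e.g. {"texts": ["..."]}
    predictions = model.predict(texts).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```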
Monitor
We set up monitoring of the model’s performance so we can spot dips early and work out how and when to optimise and recalibrate your model, keeping its responses accurate and effective.
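At its simplest, that monitoring can be a regular check that recomputes a quality metric on recent, labelled predictions and raises an alert when it drops below an agreed level; the baseline figure and threshold in the sketch below are hypothetical.

```python
# A minimal, hypothetical monitoring check: compare recent accuracy to a
# baseline and flag a dip that would trigger recalibration.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.90   # accuracy measured when the model was deployed
ALERT_THRESHOLD = 0.05     # acceptable drop before we investigate

def check_for_dip(true_labels, predicted_labels):
    recent_accuracy = accuracy_score(true_labels, predicted_labels)
    if BASELINE_ACCURACY - recent_accuracy > ALERT_THRESHOLD:
        print(f"Alert: accuracy dropped to {recent_accuracy:.2f}")
    else:
        print(f"OK: accuracy is {recent_accuracy:.2f}")

# Example with made-up labels from a recent batch of traffic
check_for_dip(
    ["account", "delivery", "account", "delivery"],
    ["account", "delivery", "delivery", "delivery"],
)
```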
An NLP model is not perfect when it is first developed. It improves through experimentation and recalibration, and through this continued fine-tuning it can better solve the problem it’s been designed for.
The baseline model is not perfect, but it’s a first attempt that gives us reasonable results for a task, as well as direction for where to apply more complex solutions.
Transitioning from a baseline model to an advanced model requires experimentation with different NLP techniques, parameters and ways of approaching the problem, which all combine to improve a model’s performance.
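To give a flavour of that experimentation, the sketch below searches over a few hypothetical parameter settings of a simple text classification pipeline and keeps the best combination; real experiments cover many more techniques and settings.

```python
# A minimal, hypothetical experiment: grid-search a few parameters of a
# TF-IDF + logistic regression pipeline and keep the best combination.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

texts = ["refund my order", "package was damaged",
         "reset my password", "cannot log in",
         "where is my parcel", "account locked out"]
labels = ["delivery", "delivery", "account", "account", "delivery", "account"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])

# Each parameter combination is evaluated with cross-validation.
search = GridSearchCV(
    pipeline,
    param_grid={
        "tfidf__ngram_range": [(1, 1), (1, 2)],
        "clf__C": [0.1, 1.0, 10.0],
    },
    cv=2,
)
search.fit(texts, labels)
print(search.best_params_, search.best_score_)
```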
If a model needs to run on a different infrastructure or device (for example a mobile phone), or needs to interact with the user in real time (for example an AI-powered chatbot), it needs to be optimised.
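One common form of that optimisation is making the model smaller and faster, for example by quantising its weights. The sketch below shows the idea on a toy PyTorch model, purely as an illustration; what real optimisation looks like depends on the model and the target device.

```python
# A minimal, hypothetical optimisation: dynamic quantisation of a toy
# PyTorch model, converting its linear-layer weights to 8-bit integers
# so it is smaller and faster on CPU.
import torch
import torch.nn as nn

model = nn.Sequential(      # stand-in for a real NLP model
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantised model is used exactly like the original one.
example_input = torch.randn(1, 128)
print(quantised(example_input))
```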
Matt is an environmental scientist by background with a PhD in forestry. He left academia in 2015 and has since worked as a data scientist in a number of UK Government departments, startups, and NGOs. Matt has experience working in the Legal, Health, and Government sectors, where his work focussed on developing reproducible and transparent ways of working and on natural language problems.
Nick studied computer science at Imperial College, where his thesis was on detecting lies from video. He has worked as a data scientist for more than 8 years in various startups and most recently at the Wellcome Trust. Since his transition to industry he has switched focus from computer vision to NLP and has experience working with datasets ranging from a few hundred to millions of examples and thousands of classes. He has a strong interest in reproducible, reusable and open source code.