Mor Consulting Ltd. is an A.I.-focused consultancy offering strategic research and development, owned by Ian Ozsvald and based in London (UK).

  • Machine Learning for Data Science
  • Natural Language Processing
  • Social Graph Analysis
  • High Performance Computing
  • Big Data and Parallel Computing
  • Data Visualisation
  • Image Recognition
  • NVIDIA CUDA, OpenCL and OpenMP
  • Numerical simulations
  • Speech Recognition
  • Robotic Control
  • Python, C++, MATLAB, Arduino, Mobile

We take on full or part-time positions to solve interesting problems. Some of our recent work is listed below; feel free to get in touch if you think that we can help.

UPDATE - Mor is now ModelInsight (2015+)

Mor's founder is now a co-founder of the larger data science agency ModelInsight, which specialises in Machine Learning, Data Science and Natural Language Processing, covering strategy, data review and implementation. We also coach existing data science teams.

Data Science Jobs in London Email List (2016)

To help London Data Scientists find the right job we run a jobs list; jobs are listed after being vetted (no spam).

Data Science Delivered (2015)

Founder Ian Ozsvald has written a collection of notes and Jupyter Notebooks on shipping Data Science Products.

Text and image annotation (2013)

We're specifying a text and image annotation web service that spots brand mentions and accurately disambiguates names (e.g. finding Apple-the-company and ignoring apple-the-fruit) and detects brand logos (e.g. spotting the Starbucks logo in an Instagram photo). Please contact us if you're interested in learning more; contact details are in the footer. Some open-source experiments are available on Ian's blog. Tools include Python, scikit-learn, scipy, NLTK and tesseract.
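
The disambiguation idea can be illustrated with a toy sketch: score a mention of an ambiguous name by the context words around it. The cue words and the classifier below are invented for illustration; the real service uses proper NLP tooling (NLTK, scikit-learn) rather than hand-written word lists.

```python
# Toy sketch: disambiguate "Apple" the company from "apple" the fruit
# using simple context cues. Cue lists are invented for illustration.
COMPANY_CUES = {"iphone", "shares", "stock", "ceo", "launch"}
FRUIT_CUES = {"pie", "eat", "tree", "juice", "orchard"}

def classify_apple(text):
    """Return 'company', 'fruit' or 'unknown' for a mention of 'apple'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if "apple" not in words:
        return None  # no mention at all
    company_score = len(words & COMPANY_CUES)
    fruit_score = len(words & FRUIT_CUES)
    if company_score > fruit_score:
        return "company"
    if fruit_score > company_score:
        return "fruit"
    return "unknown"

print(classify_apple("Apple announced a new iPhone at the launch event"))  # company
print(classify_apple("I ate an apple pie under the tree"))                 # fruit
```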

Data Science for Transport Logistics (2013)

We've worked with CityMapper, analysing their transport data to model live train movements and make commuting in London better. Tools in use include NetworkX, Neo4j, Gephi and MongoDB.

Twitter brand analysis for YouGov (2012 - 2013)

Working with AdaptiveLab for YouGov we're tagging Twitter and Facebook posts using Named Entity Recognition to mark up brand discussions, adding sentiment analysis for brand-reach reporting. Processing 3 million tweets per day requires a carefully designed multi-computer pipeline, along with robust reporting of the state of the pipelines and the cluster. Ian taught topics related to this at PyCon 2013 in his Applied Parallel Computing tutorial. Tools used include Elastic MapReduce, MrJob, Redis and twitter-text-python.
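
The shape of such a pipeline can be sketched with in-process queues; this is a minimal illustration only, the real system distributes the stages across machines with Redis and Elastic MapReduce, and the brand lists and sentiment words below are invented.

```python
# Minimal sketch of a multi-stage tagging pipeline using an in-process queue;
# each tweet passes through an entity-tagging stage, then a sentiment stage.
from queue import Queue

def tag_entities(tweet):
    """Hypothetical NER stage: mark known brand names."""
    brands = {"starbucks", "apple"}
    found = [w for w in tweet.lower().split() if w in brands]
    return {"text": tweet, "brands": found}

def score_sentiment(record):
    """Hypothetical sentiment stage: naive word counting."""
    positives = {"love", "great"}
    negatives = {"hate", "awful"}
    words = record["text"].lower().split()
    record["sentiment"] = sum(w in positives for w in words) - sum(w in negatives for w in words)
    return record

def run_pipeline(tweets):
    """Push tweets through each stage in order, queue to queue."""
    q = Queue()
    for t in tweets:
        q.put(t)
    results = []
    while not q.empty():
        record = tag_entities(q.get())
        results.append(score_sentiment(record))
    return results

out = run_pipeline(["I love Starbucks coffee", "Apple keynotes are awful"])
```

In the production setting each stage would be a separate worker process reading from and writing to Redis lists, so stages can be scaled independently.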

Twitter event analysis and geo-location experiments (2013)

Using tweet data at conferences (PyCon 2013, PyData 2013) Ian experimented with building topic maps (using hashtags, usernames, noun phrases) to understand what was being discussed at a conference. This gives a "1000 metre overview". This analysis continued with plotting geo-located tweets to define areas of popularity in London, Brighton and the UK.

Depth mapping using Kinect and Linux (2012)

For a computer vision prototype for the Chilean mining industry we experimented with Kinect devices using Python to generate 3D depth maps and volume estimates of rocks.

Parallel Python and High Performance Python tutorials (2011, 2012)

Ian has taught a mix of scientific python tutorials in recent years:

Applied Parallel Computing at PyCon 2013 extends the previous tutorials with a focus on parallel computing using clusters (map/reduce using Disco, Redis pipelines, parallelpython, native Python parallelisation) along with a look at 'lessons learned' in previous commercial engagements.

Parallel Python at EuroSciPy 2012 covers 5 ways to move towards parallel Python across local and cloud-based networks, including IPython-parallel and PiCloud; includes src and slides.

High Performance Python 1 at PyCon 2012 covers 9 approaches to making a pure Python program faster, including profiling, numpy and compiling to C with ShedSkin and Cython; includes src, slides and video.

High Performance Python at EuroPython 2012 covers 10 approaches to making pure Python faster (as for the PyCon tutorial), from profiling through to multi-core, multi-machine and GPU solutions with pyCUDA; includes src, slides and video.
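
A recurring message of these tutorials is "profile first, then optimise". As a minimal stand-alone illustration (the toy functions here are invented, not tutorial material), cProfile shows where a program spends its time, and an algorithmic change removes the hotspot entirely:

```python
# Profile a toy program with cProfile, then compare a loop against a
# closed-form replacement - the classic "measure before you optimise" step.
import cProfile
import io
import pstats

def slow_part():
    # O(n) loop summing squares
    return sum(i * i for i in range(200_000))

def fast_part():
    # closed-form sum of squares 0..m: m*(m+1)*(2m+1)/6
    m = 200_000 - 1
    return m * (m + 1) * (2 * m + 1) // 6

profiler = cProfile.Profile()
profiler.enable()
result_a = slow_part()
result_b = fast_part()
profiler.disable()

assert result_a == result_b  # same answer, very different cost

# Print the summary line of the profile report
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(next(line for line in stream.getvalue().splitlines() if "function calls" in line))
```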

OpenPlants real-world Optical Character Matching iPhone App (2012)

Working with Kasabi (of Talis) we created the OpenPlants plant-label matcher for use at the Royal Botanic Gardens, Kew, London, which allows iPhone users to photograph a plant label and get Wikipedia information back in return. This removes the need to add a QR code to the 10,000 plant labels at Kew Gardens.

StrongSteam AI and data mining API for web and mobile developers (2011 - 2012, defunct now)

Our (now defunct) API exposed many of our tools for use by non-AI developers over a simple web interface. The goal was to let web and mobile developers build powerful algorithms without having to go through the time-consuming process of learning new libraries (along with compiling, supporting and integrating them!), backed by a scalable web service. The project was on Twitter as @StrongSteamAPI.

See these example videos showing the first API for noisy OCR (optical character recognition) matching, which removes the need for QR codes.

High Performance Python Tutorial and eBook

At EuroPython 2011 and PyCon 2012 Ian taught High Performance Python; this material has been published as a free eBook and may be turned into a larger eBook later. Source, tutorial video from PyCon, slides etc. are available via the link.

Product matching research for InvisibleHand (2011)

We've been working with InvisibleHand to analyse how humans determine if e-commerce pages represent the same product so we can add human-like reasoning to their superb price comparison toolbar.

SocialTies social discoverability mobile app (2010 - Present)

We've powered the artificial intelligence for text mining behind the social discovery app SocialTies; the app runs on smartphones to help you discover interesting people at events.

Working with RadicalRobot we've created a mobile application powered by a Python web API that constantly models everyone it has learned about, backed by a model of 'conversation space' and a social graph, to efficiently help conference goers meet interesting people.

Model Optimisation for reverse-engineered physics simulations (2006 - Present)

Professor Paul Fewster's Research Group at PANalytical work with X-Ray Diffraction techniques to analyse complex fabricated structures such as next-generation blue LEDs and laser diodes. Physics simulations are coupled with optimisation algorithms to automatically fit proposed structure models to the actual results that are measured using X-Ray spectroscopy equipment.

Example: The optimisation techniques were improved so that complex fitting problems were solved more quickly and more reliably. Problems faced included long computation times for complex simulations and non-obvious fitness landscapes (since the problems have high dimensionality). Approaches in use include pyCUDA, NVIDIA CUDA with Visual Studio, algorithmic design and optimisation.
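
The fit-model-to-measurement loop can be sketched with a simple random hill climber; the forward model and all the numbers below are invented stand-ins, whereas the real system couples full physics simulations with far richer optimisers and GPU acceleration.

```python
# Illustrative sketch: adjust simulation parameters to minimise the mismatch
# between a simulated curve and "measured" data, keeping only improvements.
import random

def simulate(params, xs):
    """Hypothetical forward model: a quadratic standing in for the physics."""
    a, b = params
    return [a * x * x + b * x for x in xs]

def mismatch(params, xs, measured):
    """Sum-of-squares error between simulation and measurement."""
    return sum((s - m) ** 2 for s, m in zip(simulate(params, xs), measured))

def fit(xs, measured, steps=2000, seed=0):
    rng = random.Random(seed)
    best = [1.0, 1.0]
    best_err = mismatch(best, xs, measured)
    for _ in range(steps):
        trial = [p + rng.gauss(0, 0.1) for p in best]
        err = mismatch(trial, xs, measured)
        if err < best_err:  # hill climbing: keep only improvements
            best, best_err = trial, err
    return best, best_err

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
measured = [2.0 * x * x - 0.5 * x for x in xs]  # true params: a=2.0, b=-0.5
params, err = fit(xs, measured)
```

Real X-ray fitting problems are high-dimensional with rugged fitness landscapes, which is why the production work needed stronger optimisation algorithms and CUDA-accelerated simulation rather than this toy loop.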

Social graph visualisation (2010)

As an experiment we visualised Lanyrd's social connectivity graph using Twitter data. The JavaScript visualisation runs in browsers and on mobile devices; it lets people see who they know at conferences and who is well connected. The code is open source.

A.I. consultation with Applied Machine Intelligence (2010 - Present)

Ian is working with Applied Machine Intelligence to develop low-cost robotic systems for the masses. Topics include machine vision and models of human interaction.

Parallelisation of flood model simulations (2007 - Present)

With Ambiental we're assisting in the parallelisation of their flood modelling simulations - parallelisation is essential to take advantage of multi-core and multi-CPU resources so that time-intensive simulations complete quickly. Tools include OpenMP, Visual Studio and NVIDIA's CUDA. Platforms include 32-bit and 64-bit Windows and Linux solutions.

Audio analysis for ProCasts (2010)

ProCasts wanted to automate elements of their educational screencast production; we identified ways of detecting pauses to automatically cut samples into shorter files, which were matched against the original text script. Tools used include text-to-speech, audio analysers and Python.
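
The core pause-detection idea is simple: find runs of low amplitude long enough to count as silence and use them as candidate cut points. The sketch below uses an invented amplitude list and thresholds; the production work operated on real audio with proper analysis tools.

```python
# Sketch of pause detection: report runs of low-amplitude samples as
# (start, end) index pairs, candidates for cutting the recording.
def find_pauses(samples, threshold=0.05, min_len=3):
    """Return (start, end) pairs where |amplitude| stays below threshold."""
    pauses, start = [], None
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            if start is None:
                start = i  # a quiet run begins
        else:
            if start is not None and i - start >= min_len:
                pauses.append((start, i))  # quiet run long enough to keep
            start = None
    if start is not None and len(samples) - start >= min_len:
        pauses.append((start, len(samples)))  # trailing silence
    return pauses

signal = [0.8, 0.6, 0.01, 0.02, 0.0, 0.01, 0.7, 0.9, 0.0, 0.01, 0.02]
print(find_pauses(signal))  # → [(2, 6), (8, 11)]
```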

We acted as Artificial Intelligence consultants to a client building intelligent virtual humans using technologies like face and voice recognition, natural language parsing and models of human behaviour, advising on the creation of these agents.

Python interfacing with pyFlux for Cedrat's Flux (2008)

pyFlux allows a Python programmer to talk to the API inside Cedrat's Flux magnetic flux simulation package. pyFlux has scant documentation, so it requires an inquisitive mind to find the right API calls. For the client we automated repetitive flux simulation and analysis tasks.

Routing on Digital Elevation Maps (2006)

Using a Digital Elevation Map, the customer required a set of optimal routes for an oil pipeline across a mountainous area. Constraints placed upon the routing included maximum elevation limits, length of pipeline, preference to avoid changes in elevation and a desire to avoid unsafe areas.

These constraints represented a trade-off between cost and safety. The system would quickly generate a route between any two points, and these routes could then be used during a cost-benefit analysis for routing over a large terrain. Developed in conjunction with Dr. Justin Butler at Ambiental.
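
The constraint structure described above maps naturally onto a weighted shortest-path search. The sketch below is a toy version only: Dijkstra's algorithm over an invented elevation grid, with a hard maximum-elevation limit and a penalty for elevation change; the real system handled much larger terrains and more constraints.

```python
# Toy sketch of constrained routing over a digital elevation map: Dijkstra on
# a grid where each step costs distance plus a penalty for elevation change,
# and cells above a maximum elevation are forbidden. Grid values are invented.
import heapq

def route(grid, start, goal, max_elev=90, climb_penalty=2.0):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] > max_elev:
                continue  # hard elevation limit
            step = 1.0 + climb_penalty * abs(grid[nr][nc] - grid[r][c])
            nd = d + step
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = node
                heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

grid = [
    [10, 10, 95, 10],
    [10, 20, 95, 10],
    [10, 10, 10, 10],
]
path, cost = route(grid, (0, 0), (0, 3))  # route detours around the 95m ridge
```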

Natural language parsing for news summaries (2003 - 2004)

A large volume of news articles needed to be summarised quickly to bring breaking news events to the user's attention, preferably in real-time. After our involvement the demonstration system was extended into the ScoopJack news aggregation service and is also sold by Corpora.

The prototype was developed with Dr. Nick Jakobi during employment at Algorithmix Ltd. (the website is no longer online). Algorithmix was bought by Corpora during 2004.

Model optimisation for large-scale logistics (1999 - 2004)

Example: A simulated model of an industrial waste collection operation was developed in conjunction with the industrial partner. Parameters to the model included the amount and types of waste to be collected, number and skills of each waste-truck driver and the location and capabilities of each waste-disposal facility.

An optimisation system was developed to optimise each parameter to produce robust, reliable work-plans for each driver, whilst respecting work-time constraints and final costs. The system has been in use since installation and produces more efficient plans than those generated by the unassisted human planner. Developed with Olivier Trullier and team whilst working with MASA Group.
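
The work-plan idea can be sketched as an assignment problem: give each collection job to a driver while respecting remaining working hours. The greedy heuristic, site names and numbers below are invented for illustration; the production system used proper optimisation rather than a one-pass greedy rule.

```python
# Toy sketch: greedily assign collection jobs to drivers while respecting
# each driver's remaining hours. Names and numbers are invented.
def plan(jobs, drivers):
    """jobs: list of (site, hours); drivers: dict name -> available hours."""
    remaining = dict(drivers)
    assignments = {name: [] for name in drivers}
    unassigned = []
    for site, hours in sorted(jobs, key=lambda j: -j[1]):  # biggest jobs first
        # pick the driver with the most spare capacity who can fit the job
        candidates = [n for n, h in remaining.items() if h >= hours]
        if not candidates:
            unassigned.append(site)
            continue
        best = max(candidates, key=lambda n: remaining[n])
        assignments[best].append(site)
        remaining[best] -= hours
    return assignments, unassigned

jobs = [("siteA", 4), ("siteB", 3), ("siteC", 5), ("siteD", 2)]
drivers = {"driver1": 8, "driver2": 6}
assignments, unassigned = plan(jobs, drivers)
```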