Michal Koperski

Deep Learning | Computer Vision Engineer | Co-Founder


About me

I am a co-founder and senior ML engineer at NeuralSpike. I specialize in solving computer vision problems such as recognition, detection, and segmentation. I played a key role in building a complete machine learning pipeline covering data acquisition, model training, model maintenance, and model deployment. I have experience deploying models on embedded devices, including for real-time applications.

I was a post-doc at INRIA (STARS). My research focused on action recognition, action detection, codebook representation, multimodal representation (RGB + Depth), spatio-temporal attention, and deep learning.

My PhD thesis, supervised by Francois Bremond, was part of the Toyota Partner Robot project. In this project, we focused on assessing people’s cognitive and behavioral status based on monitoring their daily-living activities.

During my PhD, I was a research intern at Disney Research Pittsburgh, where I worked on codebook representations for person re-identification.


Interests

  • Computer Vision
  • Action Recognition/Detection
  • Person Re-Identification
  • Machine Learning
  • Deep Learning


Education

  • PhD in Computer Science (Computer Vision), 2014-2017

    INRIA, University of Nice Sophia-Antipolis

  • MSc in Computer Science, 2007-2009

    Poznan University of Technology

  • BSc in Computer Science, 2004-2008

    Poznan University of Technology


Experience

Since Dec 2023, I have been a co-founder and Senior ML Engineer at NeuralSpikeAI:

  • I build relationships with clients,
  • I manage a team of ML engineers,
  • I train deep learning models with a focus on edge devices,
  • I supervise the entire model preparation process, from data acquisition to model deployment and monitoring.

From Aug 2018 until Nov 2023 I was a Senior ML Engineer at Tooploox:

  • I worked on deep neural network models for object recognition and detection in images and videos,
  • I advised junior scientists,
  • I worked on a research project on incremental and multimodal learning (see our publication),
  • I worked on code transfer to our industrial partners.

From Aug 2020 until Nov 2022 I was a Senior Machine Learning Engineer at Mmhmm:

  • I was the lead Machine Learning engineer,
  • I worked on deep neural networks for semantic segmentation,
  • I developed models ready for real-time applications.

From Jan 2019 until Jun 2020 I was a Machine Learning Specialist at June:

  • I was the lead Machine Learning engineer,
  • I was responsible for the design of the whole ML pipeline including data acquisition, model training, model maintenance, and model deployment to an embedded device,
  • My team managed to significantly improve model accuracy,
  • I was also responsible for proposing new product features and discussing their implementation with the business.

From Dec 2017 until Jul 2018 I was a post-doc at INRIA (STARS).

  • I worked on deep neural networks with spatio-temporal attention for action recognition,
  • I worked on real-time action detection,
  • I worked on technology transfer to industrial partners,
  • I published 6 scientific papers.

From Feb 2014 until Nov 2017 I was a PhD student at INRIA (STARS). During my PhD:

  • I proposed, implemented, and validated several methods for action recognition and action detection,
  • I published 9 scientific papers,
  • the code of the proposed methods was successfully transferred to Toyota.

From Jan 2016 until Apr 2016 I was a Research Intern at Disney Research Pittsburgh, where I worked on codebook representations for person re-identification. During my internship:

  • I published 1 scientific paper,
  • 1 patent request was filed.

From Apr 2013 until Sep 2013 I was a Research Intern at INRIA (STARS), where I worked on multimodal RGB-D representations for action recognition. During my internship:

  • I proposed, implemented, and validated a new RGB-D descriptor for action recognition,
  • I published 1 scientific paper.

From Apr 2010 until Apr 2012 I was a Research Engineer at Poznan University of Technology (IDSS), where I worked on automatic bird species detection using machine learning. During the project:

  • I implemented and validated a bird species detection system for mobile devices (ARM),
  • I published 1 book chapter.

PhD Thesis

On November 9th, 2017, I defended my Ph.D. thesis in Computer Science (Computer Vision) at INRIA and the University of Nice Sophia-Antipolis, France. The topic of my Ph.D. was “Human Action Recognition in Videos with Local Representation”. All the research in my thesis was conducted at INRIA under the supervision of Francois Bremond.

The jury of my Ph.D. defense was:

  • Frederic Precioso, Professor, University of Nice Sophia-Antipolis, president,
  • Matthieu Cord, Professor, Pierre and Marie Curie University, reviewer,
  • Leonid Sigal, Professor, University of British Columbia, reviewer,
  • Jean-Marc Odobez, Professor, IDIAP, examiner,
  • Francois Bremond, Research Director, INRIA (STARS), advisor.


Publications

Plugin Networks for Inference under Partial Evidence

In WACV. (2019).

Toyota Smarthome: Real-World Activities of Daily Living

In ICCV. (2019).

A New Hybrid Architecture for Human Activity Recognition from RGB-D videos

In MMM. (2019).

Spatio-Temporal Grids for Daily Living Action Recognition

In ICVGIP. (2018).

Online temporal detection of daily-living human activities in long untrimmed video streams

In IPAS. (2018).

PRAXIS: Towards automatic cognitive assessment using gesture recognition

In Expert Systems with Applications. (2018).

Deep-Temporal LSTM for Daily Living Action Recognition

In AVSS. (2018).

Action Recognition based on a mixture of RGB and Depth based skeleton

In AVSS. (2017).

Groups Re-identification with Temporal Context

In ICMR. (2017).

Online Recognition of Daily Activities by Color-Depth Sensing and Knowledge Models

In Sensors. (2017).


Demo

The demo shows a real-time action detection framework for daily-living patient monitoring.