building semantic-driven apps & games
engaging with human-like behaviors
processing database use cases
providing new ways to experience live music
using Automatic Speech Recognition with
Google© Cloud Speech, Apple© SpeechKit APIs
or the Sphinx open-source project
using Text-to-Speech with the Acapela© TTS SDK
or the Voxygen© Baratinoo SDK
using Natural Language Processing with
L&J© "skill engine" embedded technology
C/C++ & Objective-C SDK for Android & iOS platforms
Natural-language software agents
using Human Behavior Simulation with
L&J© "motion engine" embedded technology
C/C++ & Objective-C SDK for Android & iOS
Automatic real-time animation & synchronization of video, 2D and 3D character formats
using Apple© SceneKit & ARKit APIs
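Real-time character animation of the kind listed above is commonly driven by keyframe interpolation: given values keyed at known times, the value at any playback time is blended from the surrounding pair of keyframes. A minimal linear-interpolation sketch in Python; the jaw-opening keyframe data is invented for illustration and is not L&J's motion engine:

```python
def interpolate(keyframes, t):
    """Linearly interpolate a value at time `t` from sorted (time, value) keyframes.

    `t` is clamped to the range covered by the keyframes.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the pair of keyframes surrounding t and blend between them.
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# Example: a hypothetical jaw-opening channel keyed for a lip-sync cue
# (0.0 = closed, 1.0 = fully open).
jaw = [(0.0, 0.0), (0.2, 1.0), (0.4, 0.0)]
```

The same evaluation loop runs once per rendered frame, which is what keeps 2D/3D characters synchronized with audio or video playback.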
using Machine Learning with word-embedding technology (word2vec)
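Word embeddings map words to dense vectors so that semantic similarity becomes a geometric measure, typically cosine similarity. A toy sketch of that idea with hand-made vectors; a real word2vec model would learn vectors of hundreds of dimensions from a large corpus:

```python
import math

# Toy 4-dimensional embeddings, invented for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.2, 0.8],
    "apple": [0.1, 0.2, 0.9, 0.1],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(word):
    """Return the vocabulary word closest to `word` in embedding space."""
    return max(
        (w for w in embeddings if w != word),
        key=lambda w: cosine_similarity(embeddings[word], embeddings[w]),
    )
```

With learned embeddings, `most_similar` is the operation that lets a language agent relate user vocabulary to known concepts.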
using Deep Learning with Google© TensorFlow & SyntaxNet libraries
using Deep Learning with the dlib open-source project
with L&J real-time music signal processing algorithms
Real-time beat tracking & loop extraction
Music signal analysis (energy & audio-linked sequences)
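Beat tracking of the kind listed above commonly starts from short-time energy: frames whose energy jumps well above the local average are beat candidates. A simplified, pure-Python illustration on a synthetic signal; the frame size and threshold ratio are arbitrary values for the demo, not L&J's actual algorithm:

```python
import math

def short_time_energy(samples, frame_size):
    """Energy of consecutive non-overlapping frames of the signal."""
    return [
        sum(s * s for s in samples[i:i + frame_size])
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]

def detect_beats(samples, frame_size=100, ratio=2.0):
    """Indices of frames whose energy exceeds `ratio` times the mean energy."""
    energies = short_time_energy(samples, frame_size)
    avg = sum(energies) / len(energies)
    return [i for i, e in enumerate(energies) if e > ratio * avg]

# Synthetic test signal: a quiet background sine with a loud burst
# every 400 samples, mimicking percussive beats.
signal = [0.05 * math.sin(0.1 * n) for n in range(1600)]
for beat_start in (0, 400, 800, 1200):
    for n in range(beat_start, beat_start + 100):
        signal[n] += math.sin(0.5 * n)
```

Once beat frames are located, loop extraction amounts to cutting the signal at beat boundaries whose spacing repeats.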
using connected objects such as device sensors, lightshow systems & wearable devices
On-beat real-time sync of lightshows, running pace or graphics animations
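Once a beat grid is known, synchronizing a lightshow or an animation reduces to scheduling cues at the beat timestamps. A hedged sketch of that scheduling step; the 120 BPM tempo and the "flash" cue are made up for the example:

```python
def beat_times(bpm, duration_s):
    """Timestamps (seconds) of every beat within a track of the given length."""
    period = 60.0 / bpm  # seconds per beat
    times = []
    t = 0.0
    while t < duration_s:
        times.append(t)
        t += period
    return times

def schedule_on_beat(bpm, duration_s, action):
    """Pair each beat timestamp with the cue produced by `action(beat_index)`."""
    return [(t, action(i)) for i, t in enumerate(beat_times(bpm, duration_s))]

# Example: flash a light on every beat of a 2-second segment at 120 BPM.
cues = schedule_on_beat(120, 2.0, lambda i: f"flash#{i}")
```

In a live system the same cue list would be fed to the lightshow controller or the animation clock instead of being returned as data.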
L&J was founded on a strong background in innovative technologies such as Virtual Reality, Augmented Reality, Embedded Multimedia and Voice over IP.
The team focuses on new experiences for Wearables, Smartphones, Tablets and connected TV ecosystems, especially on Google & Apple operating systems.
L&J aims to create new apps through our own showcases, as well as client apps built to specification.
We imagine new concepts and make them real. We create 2D/3D & video characters, 3D animations, videos and audio design, using standard creation & post-production tools.
We are experts in C/C++ & Objective-C development: architecture, real-time processing, 2D/3D graphics and audio/video. We are also experts in MATLAB modeling and simulation.