
TECHNOLOGY
 

INTUITIVE, PERCEPTIVE & VERSATILE

PLATFORM

Pixxon is built on the foundation of three strong algorithms powered by Machine Learning, Analytics and AI:


OBJECT TRACKING

​

Object tracking is a computer vision-based technique used to follow the movement of objects in videos in real time.

The foundational principle involves continuously estimating the location and pose of an object in a video stream and tracking multiple objects at the same time.


Pixxon's Object Tracking algorithms are based on machine learning techniques and have been trained to identify and track objects. They use AI-driven data-association techniques to link detections across frames, and the algorithm has been tuned for accuracy, precision, and recall, delivering results with surety.
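As an illustration of the data-association step described above (a minimal sketch with illustrative thresholds, not Pixxon's actual implementation), a greedy IoU-based matcher can link existing tracks to new detections:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedy data association: match each existing track to the unused
    detection with the highest IoU above the threshold."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_threshold
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Production trackers typically add motion prediction and globally optimal assignment on top of this, but the frame-to-frame matching idea is the same.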

OBJECT DETECTION

​

Object Detection is a primary computer vision technique that can locate instances of objects in videos. The program identifies and locates all objects in a video and classifies each object into a predefined set of categories.

 

The Pixxon Object Detection algorithm has its foundations in machine learning techniques and has been trained to identify objects from hundreds of hours of footage. The program then leverages AI to detect a wide variety of object categories, such as pedestrians, cars, buildings, and animals, from a feed.
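Detectors of this kind typically emit many overlapping candidate boxes per object; non-maximum suppression keeps only the strongest. This is a generic sketch with illustrative thresholds, not Pixxon's tuned pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Non-maximum suppression: keep the highest-scoring box and drop
    any remaining box that overlaps it too strongly."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_threshold]
    return keep
```

The surviving indices are the final detections passed downstream to classification and tracking.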


OUTPUT FOR ACTION

​

Output for Action is the concept of using computer vision to combine multiple images into a single, larger image. It is often used to create panoramic images, but can also be used to create a mosaic or composite image from multiple images taken from different viewpoints. With Pixxon, the process involves several steps, including image registration, alignment, and blending. Image registration aligns the images to be stitched based on common features or points of interest. The relative orientation and position of the images are estimated so that they can be combined seamlessly.
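A minimal sketch of the registration step, assuming feature matching has already produced corresponding point pairs between the two images, estimates a robust translation for alignment (real stitchers estimate full homographies; this is illustrative only):

```python
from statistics import median

def estimate_translation(points_a, points_b):
    """Estimate the (dx, dy) translation aligning image A onto image B
    from matched feature points, using the median offset so that a few
    mismatched pairs do not skew the result."""
    dxs = [bx - ax for (ax, _ay), (bx, _by) in zip(points_a, points_b)]
    dys = [by - ay for (_ax, ay), (_bx, by) in zip(points_a, points_b)]
    return median(dxs), median(dys)
```

Once the offset is known, the overlapping regions can be blended to produce the seamless composite described above.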

FRS (OD/OT)
​

Facial Recognition Solutions are derived from Object Detection and Tracking. They use various algorithms to identify and verify people's identities based on their facial features. The solution typically works by extracting facial features from an image or video and comparing them to a database of known faces to identify a person. It has been tested and validated on a database of 1 million faces.

 

The FRS has been trained to recognize a wide variety of facial features, including the shape of the face, the distance between the eyes, and the texture of the skin, with matching and recognition enabled.
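The matching step can be pictured as a nearest-neighbour search over face embeddings. This is a generic sketch with an illustrative similarity threshold, not the production FRS:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_face(query, database, threshold=0.8):
    """Return the identity whose stored embedding is most similar to the
    query embedding, or None if no candidate clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```

At the scale of a million faces, the linear scan shown here would be replaced with an approximate nearest-neighbour index, but the comparison logic is the same.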

​

Capabilities:

                       

  • Millisecond turnaround time

  • Search and matching done automatically

  • Tracks people as they move

  • Logs detected faces in the database

​


High Density Crowd Dynamics:

​

Crowd detection technology is designed to automatically detect and track the movement of people in a crowd. This technology is typically used in a variety of settings, including public spaces, transportation hubs, and event venues, to monitor and manage the flow of people in order to ensure safety and security. The system has been built leveraging computer vision algorithms, sensor networks, and machine learning techniques. These systems can detect the presence and movement of people in real time and can be configured to trigger alarms or other alerts when certain thresholds or patterns of behavior are detected.
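The threshold-based alerting described above can be sketched as follows, assuming a person detector already supplies bounding boxes (zone names and limits here are hypothetical):

```python
def count_in_zone(detections, zone):
    """Count detections whose centre falls inside a rectangular zone
    given as (x1, y1, x2, y2); detections are bounding boxes."""
    x1, y1, x2, y2 = zone
    n = 0
    for bx1, by1, bx2, by2 in detections:
        cx, cy = (bx1 + bx2) / 2, (by1 + by2) / 2
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            n += 1
    return n

def over_capacity(detections, zones, limits):
    """Return the names of zones whose live count exceeds their limit."""
    return [name for name, zone in zones.items()
            if count_in_zone(detections, zone) > limits[name]]
```

Running this per frame yields the live crowd count per zone and the list of zones that should raise an alert.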

​


Detect, monitor, regulate and parse through high density crowded areas in real time with analytics and reporting.

​​

  • Live crowd count

  • Analyze specific areas in the crowd

  • Footfall detection 

  • Real time data count and structuring

    

​

Image Stitching:

 

Image stitching is, by definition, the process of combining multiple images to create a single, seamless image. Using this technique, the engine can build cohesive and comprehensive 3D renders of a subject using images taken with a single camera or across multiple cameras. The software typically uses image registration and feature matching algorithms to align the images and blend them together. It includes specialized features for correcting distortion, vignetting, and other issues that can arise when combining images.
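The blending step can be sketched on a single row of grayscale pixels: across the overlap, the left image fades out while the right image fades in. This is illustrative only; real stitchers blend in 2-D, often with multi-band blending:

```python
def blend_overlap(left, right, overlap):
    """Linearly blend two 1-D pixel rows that share `overlap` pixels:
    the left image's weight ramps down as the right image's ramps up."""
    out = list(left[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)          # right image weight ramps up
        out.append(left[len(left) - overlap + i] * (1 - w) + right[i] * w)
    out.extend(right[overlap:])
    return out
```

The smooth weight ramp is what removes the visible seam where the two exposures meet.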

​

State-of-the-art image building and generation based on snippets and captures from multiple, scattered sources and cameras, with precision outputs.

 

  • 360 Degree view of source images

  • Cohesive image output

​


Video Analytics:

​

The Video Analytics engine/software uses artificial intelligence to analyze video data and extract useful insights and information from a series of CCTV or other recorded footage. The technology has applications in surveillance systems, where it can help to identify suspicious activity, track the movements of individuals, and alert security personnel to potential threats. It can also be used in a variety of other applications, such as traffic management, customer behavior analysis, and quality control. The software works by analyzing the audiovisual data using algorithms, in real-time, looking for patterns, anomalies, and other features that might be of interest. There are many different techniques and approaches used in video analytics.
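One of the simplest such techniques is frame differencing: flag frames where the scene changes sharply from the previous one. This sketch treats grayscale frames as flat pixel lists, with an illustrative threshold:

```python
def motion_score(prev_frame, frame):
    """Mean absolute pixel difference between two same-size grayscale
    frames, each given as a flat list of intensities."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def flag_activity(frames, threshold=10.0):
    """Return indices of frames whose change from the previous frame
    exceeds the threshold -- a simple activity/anomaly cue."""
    return [i for i in range(1, len(frames))
            if motion_score(frames[i - 1], frames[i]) > threshold]
```

Flagged frame indices can then be handed to heavier detection and tracking modules, so the expensive analysis runs only where something happened.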

​
Visceral image and video analytics on recorded footage with quick and accurate output generation using tracking and image stitching modules.

​

  • Forensic analysis

  • Scans large volumes of footage quickly

Fire and Smoke Detection:

​

Fire and smoke detection software uses video analytics technology to detect the presence of fire and smoke in a given area. The basic principle behind the software is to analyze video data from cameras and other video-recording devices in order to identify the presence of fire or smoke. The CCTV systems/video-recording devices are augmented with edge and IoT systems. The analytics is done using footage from surveillance systems, particularly in buildings and other structures where the risk of fire is significant. The software analyzes the visual and/or thermal data captured by the cameras, looking for patterns, shapes, and other characteristics that are indicative of fire or smoke. The systems can also be tuned to leverage audio analysis to detect the sound of alarms. Once a fire or smoke event is detected, the software alerts security personnel, allowing them to take appropriate action.
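A crude colour-threshold cue for the visual analysis described above can be sketched as follows; the RGB thresholds here are illustrative placeholders, not Pixxon's tuned values, and real systems combine colour with shape and flicker analysis:

```python
def fire_pixel_ratio(pixels, r_min=200, g_max=160, b_max=80):
    """Fraction of RGB pixels falling in a crude 'flame colour' range:
    strong red, moderate green, low blue."""
    hits = sum(1 for r, g, b in pixels
               if r >= r_min and g <= g_max and b <= b_max)
    return hits / len(pixels)

def fire_alert(pixels, ratio_threshold=0.05):
    """Raise an alert when flame-coloured pixels exceed the threshold."""
    return fire_pixel_ratio(pixels) > ratio_threshold
```

When the alert fires, the edge device would forward the event and the triggering frame to security personnel.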

​

Instant detection of fire, smoke and imminent danger using IoT/Edge devices equipped with CCTV cameras.

 

  • Highly secure and accurate​

  • Instant Alerts

  • Calls for help immediately

​


Personal Protective Equipment Tracking:

​

Personal protective equipment (PPE) tracking is a system, or tool, that helps manage and track the use of personal protective equipment by workers. PPE is equipment or clothing that is worn by workers to protect them from hazards or risks in the workplace, such as chemical spills, exposure to hazardous materials, or the risk of injury. The PPE tracking technology is a combination of video analytics software systems, IoT/mobile device applications, and RFID (radio-frequency identification) tags. It is used to help ensure safety compliance, i.e. that workers have the appropriate PPE for the tasks they are performing and that they are using it properly.
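The video-analytics side of PPE compliance can be sketched as a containment check: for each detected person, is some detected helmet box mostly inside their box? This is a generic sketch; the class names and overlap threshold are assumptions:

```python
def overlap_fraction(gear, person):
    """Fraction of the gear box's area lying inside the person box;
    boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(gear[0], person[0]), max(gear[1], person[1])
    ix2, iy2 = min(gear[2], person[2]), min(gear[3], person[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = (gear[2] - gear[0]) * (gear[3] - gear[1])
    return inter / area if area else 0.0

def check_compliance(people, helmets, min_overlap=0.5):
    """For each person box, report whether some detected helmet lies
    mostly inside it; a missing helmet means non-compliance."""
    return [any(overlap_fraction(h, p) >= min_overlap for h in helmets)
            for p in people]
```

Non-compliant entries would feed the real-time alerting and compliance logs described above.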

​

Detect and track protective equipment on people at hazardous sites and workplaces to ensure safety and regulatory compliance with detection score.

​

  • Detects missing equipment on stationary and moving subjects

  • Real-time alerts and data input

  • Quality Compliance

Traffic Analytics:

​

The Traffic analytics tool uses ML and AI algorithms to analyze traffic data in order to understand and improve the flow of traffic, track RTAs (road traffic accidents) and ensure motor vehicle rule compliance. The tool can be used by transportation agencies, city planners, and other organizations to improve the efficiency and safety of transportation networks. The tool captures data from integrated software systems, sensors, and cameras. The collected data is parsed for a variety of traffic-related metrics, such as the volume of vehicles on a road, the speed at which they are traveling, the patterns of traffic flow and rule compliance. The data collected is analyzed using statistical analysis, machine learning algorithms, and other tools, in order to extract useful insights and information, depending on the application of the tool.
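One such metric, vehicle speed, can be estimated from tracked centroids once the camera's pixel-to-metre scale is known. This is a minimal sketch; the frame rate and calibration constant are assumptions that depend on the installation:

```python
import math

def estimate_speed(positions, fps, metres_per_pixel):
    """Estimate a vehicle's average speed (km/h) from its per-frame
    (x, y) pixel centroids in consecutive frames."""
    dist_px = sum(math.dist(positions[i - 1], positions[i])
                  for i in range(1, len(positions)))
    seconds = (len(positions) - 1) / fps
    metres_per_second = dist_px * metres_per_pixel / seconds
    return metres_per_second * 3.6
```

Speeds above the posted limit would trigger the snapshot capture and database logging listed below.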

 

High speed tracking and analytics of live traffic feeds for rules and regulations compliance.​

​

  • Helmet and Triples Tracking

  • Real time snapshot stored in the database

  • Tracks across cameras

​
