Various research video demos with links to available open access manuscripts, open source software and datasets.

Real-Time Monocular Depth Estimation

Issue: Synthetic images captured from a graphically-rendered virtual environment primarily designed for gaming can be employed to train a monocular depth estimation model. However, such a model does not generalize well to real-world images, as the supervised model easily overfits to local features present within the training domain.

Approach: 1) train a primary model to estimate monocular depth based on synthetic images. 2) use a secondary model to transform real-world images to the synthetic style before their depth is estimated.

Application: At run-time, inference requires only two forward passes: once through the style transfer network and once through the depth estimation model.

Our approach produces superior qualitative (sharper) and quantitative (lower error) results compared to the contemporary state-of-the-art.
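
For illustration, the two-pass inference described above can be sketched in PyTorch-style Python as follows; the function and argument names (estimate_depth, style_transfer_net, depth_net) are illustrative stand-ins, not the released implementation.

    import torch

    def estimate_depth(rgb_image, style_transfer_net, depth_net):
        """Run-time pipeline: real-world image -> synthetic style -> depth map.
        Both networks are assumed to be pre-trained torch.nn.Module instances."""
        with torch.no_grad():
            synthetic_style = style_transfer_net(rgb_image)  # pass 1: style transfer network
            depth_map = depth_net(synthetic_style)           # pass 2: depth estimation model
        return depth_map                                     # dense depth estimate for the input image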

2018

[abarghouei18monocular] Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer (A. Atapour-Abarghouei, T.P. Breckon), In Proc. Computer Vision and Pattern Recognition, IEEE/CVF, pp. 2800-2810, 2018. Keywords: monocular depth, generative adversarial network, GAN, depth map, disparity, depth from single image, style transfer. [bibtex] [pdf] [doi] [demo] [software] [poster]

Brain-Computer Interface for Real-time Humanoid Robot Navigation

Issue: a real-time teleoperation BCI application requires SSVEP stimuli that vary in position and size, rather than the fixed on-screen stimuli conventionally used.

Approach: variable position and size SSVEP stimuli are derived from real-time object detection pixel regions within the live video stream of a teleoperated humanoid robot traversing a natural environment, with CNN architectures used for both scene object detection and dry-EEG bio-signal decoding.

Application: Demonstrable real-time BCI teleoperation of a humanoid robot, based on the use of naturally occurring in-scene stimuli.

Successful use of a novel variable SSVEP BCI (varying: pixel pattern + region size/shape).

CNN based real-time decoding of dry-EEG bio-signals for interactive BCI applications.
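
A minimal sketch of how such a navigation loop might be structured, assuming a CNN object detector, a stimulus renderer, a CNN dry-EEG decoder and a robot motion interface are supplied as callables; all names and the flicker frequencies below are illustrative placeholders, not the interfaces used in the paper.

    def bci_navigation_step(frame, eeg_window, robot, detector, stimulus_display, eeg_decoder):
        """One iteration of a (hypothetical) SSVEP BCI teleoperation loop."""
        flicker_hz = [10.0, 12.0, 15.0]                   # illustrative frequencies, one per candidate object
        # 1) CNN object detection on the live robot video gives candidate pixel regions
        regions = detector(frame)[:len(flicker_hz)]
        # 2) each detected region is flickered at its own frequency, so the SSVEP
        #    stimuli vary in position and size with the scene
        stimulus_display(frame, regions, flicker_hz)
        # 3) a CNN decodes the dry-EEG window into the index of the attended frequency
        target_idx = eeg_decoder(eeg_window)
        # 4) steer the humanoid towards the corresponding in-scene object
        robot.walk_towards(regions[target_idx])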

2019

[aznan19navigation] Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation (N.K.N. Aznan, J. Connolly, N. Al Moubayed, T.P. Breckon), In Proc. Int. Conf. Robotics and Automation, IEEE, pp. 4889-4895, 2019. Keywords: ssvep, brain computer interface, bci, cnn, neural networks, convolutional neural networks, deep learning, dry-eeg, robot guidance. [bibtex] [pdf] [doi] [arxiv] [demo] [poster]

Prohibited Item Detection in 3D Computed Tomography Baggage Security Imagery

Issue: X-ray Computed Tomography (CT) based 3D imaging is widely used in airports for aviation security screening whilst prior work on prohibited item detection focuses primarily on 2D X-ray imagery.

Approach: we evaluate the feasibility of extending automatic prohibited item detection from 2D X-ray imagery to volumetric 3D CT baggage security screening imagery.

Application: we take advantage of 3D Convolutional Neural Networks (CNN) together with popular object detection frameworks such as RetinaNet and Faster R-CNN.

The results of our experiments demonstrate that 3D CNN models can achieve performance comparable to traditional methods (∼98% true positive rate and ∼1.5% false positive rate) while requiring significantly less time for inference (0.014s per volume).
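
For illustration, a minimal PyTorch sketch of a 3D CNN classifier over CT sub-volumes is given below; the layer sizes, input resolution and class count are illustrative only and do not reproduce the architectures evaluated in the papers listed.

    import torch
    import torch.nn as nn

    class Simple3DCNN(nn.Module):
        """Toy volumetric classifier; not the architecture from the papers below."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, num_classes),
            )

        def forward(self, volume):          # volume: (B, 1, D, H, W) CT sub-volume
            return self.classifier(self.features(volume))

    # e.g. score a single 64x64x64 baggage sub-volume (random data for illustration)
    logits = Simple3DCNN()(torch.randn(1, 1, 64, 64, 64))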

2020

[wang20multiclass-ct3d] Multi-Class 3D Object Detection Within Volumetric 3D Computed Tomography Baggage Security Screening Imagery (Q. Wang, N. Bhowmik, T.P. Breckon), In Proc. Int. Conf. Machine Learning Applications, IEEE, pp. 13-18, 2020. Keywords: luggage security, 3D CNN, 3D object detection, volumetric object detection, baggage threat detection, prohibited item detection, ATR, airport security, transport security, CT object recognition. [bibtex] [pdf] [doi] [arxiv] [demo] [talk]
[wang20ct3d] On the Evaluation of Prohibited Item Classification and Detection in Volumetric 3D Computed Tomography Baggage Security Screening Imagery (Q. Wang, N. Bhowmik, T.P. Breckon), In Proc. International Joint Conference on Neural Networks, IEEE, pp. 1-8, 2020. Keywords: luggage security, 3D CNN, 3D object detection, volumetric object detection, baggage threat detection, prohibited item detection, ATR, airport security, transport security, CT object recognition. [bibtex] [pdf] [doi] [arxiv] [demo] [talk]

Real-time Vehicle Detection and Tracking in Thermal Imagery

Issue: Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions.

Approach: We investigate the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework.

Application: Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories.

Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
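
For illustration, a minimal constant-velocity Kalman filter over 3D target positions is sketched below in NumPy, as one way to realise the tracking step described above; the frame rate and noise covariances are assumed values, not the parameters used in the paper.

    import numpy as np

    class ConstantVelocityKF:
        """Kalman filter over state [x, y, z, vx, vy, vz]; observes 3D position only."""
        def __init__(self, dt=1.0 / 25.0):                         # assumed 25 fps frame interval
            self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3)    # constant-velocity motion model
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])      # position-only measurement
            self.Q = 1e-2 * np.eye(6)                              # process noise (illustrative)
            self.R = 1e-1 * np.eye(3)                              # measurement noise (illustrative)
            self.x = np.zeros(6)
            self.P = np.eye(6)

        def step(self, z):
            """Predict, then update with a photogrammetric 3D position measurement z (metres)."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(6) - K @ self.H) @ self.P
            return self.x[:3]                                      # smoothed 3D vehicle position

Calling step once per frame with the photogrammetrically estimated target position yields a smoothed 3D vehicle trajectory of the kind tracked in the evaluation scenarios.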

2016

[kundegorski16vehicle] Real-time Classification of Vehicle Types within Infra-red Imagery (M.E. Kundegorski, S. Akcay, G. Payen de La Garanderie, T.P. Breckon), In Proc. SPIE Optics and Photonics for Counterterrorism, Crime Fighting and Defence, SPIE, Volume 9995, pp. 1-16, 2016. Keywords: vehicle sub-category classification, thermal target tracking, bag of visual words, histogram of oriented gradient, convolutional neural network, sensor networks, passive target positioning, vehicle localization. [bibtex] [pdf] [doi] [demo]