Structures as Sensors

The core of my research is the idea of using the structure itself as a sensor (Structures as Sensors, SAS) to indirectly monitor the environment inside and outside the structure. Sensing and monitoring these environments is an important step toward smart cities and smart urban infrastructure. Taking a common commercial or residential building as the "structure", example applications include: 1) monitoring the occupants inside the building, which enables efficient energy management, smart healthcare, etc.; 2) monitoring car traffic outside the building for smart, adaptive traffic control; 3) monitoring the facilities and equipment inside the building (e.g., piping, HVAC, and refrigerators) for damage detection and adaptive control; and 4) monitoring the structure and substructure (the foundation and the ground underneath the building) for structural health monitoring.

Conventional direct monitoring approaches rely on various dedicated sensors: wearables for occupant monitoring, cameras for traffic monitoring, thermometers, electrical current sensors, and flowmeters for facility monitoring, and vibration sensors for monitoring the structure. As this simple example makes clear, such approaches require many sensors, which in turn imposes large installation and maintenance costs. SAS, by contrast, shares sensors across sensing tasks, significantly reducing the sensing requirements. The main intuition behind SAS is that these different sensing targets induce similar physical phenomena in the structure. In the example above, human activities (walking, vacuuming, cooking, etc.), the movement of cars and trucks outside the building, and the flow of water in pipes all cause vibrations in the structure. Therefore, a shared set of vibration sensors can sense and monitor all of these targets, drastically reducing the sensing requirements.

"Building" as a Sensor for Non-Intrusive Occupant Monitoring

Occupant monitoring is important in many smart building applications, such as smart healthcare and efficient energy management. Monitoring occupants in indoor settings involves tracking information such as their presence, location, identity, and health status. Current sensing approaches for occupant monitoring include vision-based, RF-based, pressure-based, and mobile-based approaches. In practice, the application of these approaches is limited by privacy concerns, dense deployment requirements, and the need for occupants to carry a device. To overcome these limitations, we have introduced sensing and monitoring occupants through their footstep-induced floor vibrations.


The wave induced by a footstep reaches different sensors at different times. These Time-Differences-of-Arrival (TDoA) can then be used to localize the footstep through hyperbolic positioning. However, wave propagation in floors is dispersive (i.e., different frequency components have different propagation velocities). This dispersion distorts the signal, which significantly decreases the accuracy of TDoA estimation and occupant localization. Furthermore, the wave propagation velocities depend on the structural characteristics and differ greatly across locations of a floor, and across buildings, due to structural heterogeneity. Estimating these propagation velocities significantly adds to the calibration requirement. This project focuses on solving these two challenges. We have introduced 1) a decomposition-based approach to mitigate the dispersion effect and 2) a novel multilateration approach that eliminates the need to know the wave propagation velocity in advance.
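To illustrate the idea of velocity-free multilateration, here is a simplified sketch (under a homogeneous-velocity assumption, not our actual algorithm): the unknown wave speed is treated as an extra parameter in a least-squares fit to the measured TDoAs, so it never needs to be calibrated in advance. The sensor layout and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sensor layout on a 4 m x 4 m floor section (not from the paper).
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0], [2.0, 2.0]])

def tdoa_residuals(params, sensors, tdoa):
    """Mismatch between measured TDoAs (relative to sensor 0) and those
    predicted for a source at (x, y) with homogeneous wave speed v."""
    x, y, v = params
    d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return (d[1:] - d[0]) / v - tdoa

# Synthetic footstep at (1.5, 2.5); the true wave speed (500 m/s) is treated
# as unknown during estimation.
true_src, true_v = np.array([1.5, 2.5]), 500.0
d_true = np.linalg.norm(sensors - true_src, axis=1)
tdoa = (d_true[1:] - d_true[0]) / true_v

# Jointly estimate the footstep location and the wave speed from TDoAs alone.
sol = least_squares(tdoa_residuals, x0=[2.0, 2.0, 300.0],
                    args=(sensors, tdoa), bounds=([0, 0, 50], [4, 4, 5000]))
x_hat, y_hat, v_hat = sol.x
```

With five sensors, the four TDoAs overdetermine the three unknowns (x, y, v), which is what makes joint estimation well-posed in this toy setting.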

Localization based on footstep-induced floor vibrations holds the promise of being non-intrusive and accurate in open areas with no obstructions (e.g., walls, furniture, etc.). However, in real-life buildings with various types of obstruction, this approach often requires a high density of sensors to ensure an unobstructed path between the footsteps and the sensors and to achieve accurate measurements. Specifically, we have observed that the obstruction mass is one of the main factors affecting the wave propagation velocity, which in turn causes inaccurate localization for conventional unobstructed localization approaches. In this project, we aim to characterize the effect of obstruction mass on the propagation velocity and use that characterization to enable accurate occupant localization in obstructed indoor settings.

The objective of footstep modelling is to distinguish vibrations caused by footsteps from those caused by other impulsive excitations. Conventionally, a set of labelled data (footsteps and non-footsteps) is used to train a footstep model through supervised learning. However, the vibration responses are affected by the underlying structure, so a footstep model trained on one structure does not transfer and fails to distinguish footsteps from non-footsteps in other structures. Thus, labelled data is required in every structure (and at different locations within the same structure), which is costly and time-consuming to acquire. To address this challenge, we introduce a model transfer approach that first models the structural effect and then finds a projected space in which the structural effect on the data is minimized. In the projected space, the models are similar across structures and hence transfer well. Our approach does not require labelled data in every structure, reducing the labelled-data requirement.
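One generic way to reduce this kind of structure-induced domain shift — a stand-in illustration, not our actual projection method — is correlation alignment (CORAL), which re-colors features extracted in one structure so their second-order statistics match those of another structure, letting a classifier trained on the first operate on the second:

```python
import numpy as np

def coral_align(source_X, target_X, eps=1e-6):
    """CORAL-style alignment: whiten source features, then re-color them with
    the target covariance, so structure-specific covariance shift is removed.
    A simplified stand-in for the paper's learned projection."""
    cs = np.cov(source_X, rowvar=False) + eps * np.eye(source_X.shape[1])
    ct = np.cov(target_X, rowvar=False) + eps * np.eye(target_X.shape[1])

    def sqrtm(m, inv=False):
        # Matrix (inverse) square root via eigendecomposition; covariances
        # are symmetric positive semi-definite, so eigh applies.
        w, v = np.linalg.eigh(m)
        w = np.clip(w, eps, None)
        s = w ** (-0.5 if inv else 0.5)
        return (v * s) @ v.T

    centered = source_X - source_X.mean(axis=0)
    # Whiten with the source covariance, re-color with the target covariance.
    return centered @ sqrtm(cs, inv=True) @ sqrtm(ct) + target_X.mean(axis=0)
```

After alignment, the transformed source features have (approximately) the target structure's mean and covariance, which is the same intuition as projecting both structures into a space where the structural effect is minimized.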


Each occupant's footsteps and gait are unique; hence, vibration-based approaches have the potential to distinguish and identify occupants. However, the challenge for these methods is that the signals are sensitive to gait variations caused by different walking speeds and to floor variations caused by structural heterogeneity. To address this challenge, we utilize physical insight into how individual step signals change with walking speed and introduce an iterative transductive learning algorithm (ITSVM) to achieve robust classification with limited labeled training data.
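The transductive idea — letting the unlabeled test data itself refine the classifier — can be illustrated with a toy self-training loop. This uses a nearest-centroid classifier purely for brevity and is a simplified stand-in, not ITSVM itself:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, n_iters=5):
    """Toy transductive loop: fit a nearest-centroid classifier on labeled
    data, then repeatedly pseudo-label the most confident unlabeled points
    and absorb them into the training set (a stand-in for ITSVM)."""
    X, y = X_lab.copy(), y_lab.copy()
    unlab = X_unlab.copy()
    for _ in range(n_iters):
        if len(unlab) == 0:
            break
        classes = np.unique(y)
        centroids = np.array([X[y == c].mean(axis=0) for c in classes])
        # Distance of each unlabeled point to each class centroid.
        d = np.linalg.norm(unlab[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        conf = d.min(axis=1)               # smaller distance = higher confidence
        # Absorb the most confident half of the remaining unlabeled points.
        keep = conf <= np.median(conf)
        X = np.vstack([X, unlab[keep]])
        y = np.concatenate([y, pred[keep]])
        unlab = unlab[~keep]
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids
```

The payoff is that a model seeded with only one or two labeled step signals per occupant can sharpen its decision boundary using the unlabeled steps collected at test time.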

Gait balance is an important factor in tracking medical conditions. To track gait balance, we aim to estimate the footstep force using footstep-induced floor vibrations. The footstep force affects the energy of the vibration signal: in general, a footstep with higher force produces a vibration signal with larger amplitude, so in theory we can estimate the footstep force from the signal energy. However, due to wave attenuation, the energy of the vibration signal is also affected by the distance between the sensor and the footstep (sensors far from the footstep record lower energy, and vice versa). Further, the attenuation function depends on the structure. We overcome this challenge through footstep localization and by incorporating structural factors into an analytical force-energy-distance function.
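A minimal version of this inversion can be sketched with an assumed power-law attenuation model. The form E = k * F^2 / d^n and the constants k and n below are hypothetical placeholders for the calibrated structural factors, not our actual analytical function:

```python
import numpy as np

def estimate_force(energy, distance, k=1.0, n=2.0):
    """Invert a simplified power-law force-energy-distance model,
    E = k * F**2 / d**n, to recover footstep force from signal energy.
    k and n are structure-dependent and would be calibrated in practice."""
    return np.sqrt(energy * distance ** n / k)

# Synthetic check: a 600 N footstep observed by sensors at several distances
# (distances come from footstep localization in the real pipeline).
F_true, k, n = 600.0, 2.5, 1.8
dists = np.array([1.0, 2.0, 4.0])
energies = k * F_true**2 / dists**n          # forward attenuation model
F_hat = estimate_force(energies, dists, k, n)
```

Because the distance term is divided out, all sensors agree on the same force estimate despite recording very different energies — which is exactly why localization must precede force estimation.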

"Common Surfaces" as a Sensor for Human-Computer Interaction

Touch surfaces are intuitive interfaces for computing devices. Most traditional touch interfaces (vision-based, IR, capacitive, etc.) have mounting requirements, resulting in specialized touch surfaces limited in size, cost, and mobility. More recent work has shown that vibration-based touch sensing can localize taps/knocks, providing a low-cost, flexible alternative. These surfaces are envisioned as intuitive inputs for applications such as interactive meeting tables, smart kitchen appliance control, etc. However, due to the dispersive and reflective properties of various vibrating media, it is difficult to localize taps accurately on ubiquitous surfaces. Furthermore, no prior work has tracked continuous swipe interactions through vibration sensing. In this project, we aim to introduce a vibration-based interaction tracking system for multiple surface types.

Toward Lower Labeled Data Requirements via Online Active Learning

Deep learning models have been used successfully in medical image analysis, but they require a large amount of labeled images to perform well, and such large labeled datasets are costly to acquire. Active learning techniques can minimize the number of required training labels while maximizing the model's performance. The objective of this project is to develop a deep-learning-based online active learning approach with performance comparable to, and labeled-data requirements lower than, the baseline approaches.
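The core active-learning loop — pool-based uncertainty sampling — can be sketched as follows. A toy logistic-regression model stands in for the deep network, and the label pool plays the role of the human annotator; this is a generic illustration of the technique, not our specific approach:

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal logistic regression via gradient descent (toy stand-in for
    the deep model)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)          # gradient of logistic loss
    return w

def predict_proba(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def uncertainty_sampling(X_pool, y_pool, seed_idx, n_queries=10):
    """Pool-based active learning: repeatedly query the label of the point
    the current model is least certain about (probability closest to 0.5)."""
    labeled = list(seed_idx)
    for _ in range(n_queries):
        w = train_logreg(X_pool[labeled], y_pool[labeled])
        score = np.abs(predict_proba(w, X_pool) - 0.5)  # 0 = most uncertain
        score[labeled] = np.inf                         # never re-query a label
        labeled.append(int(np.argmin(score)))           # "annotator" labels it
    return labeled, train_logreg(X_pool[labeled], y_pool[labeled])
```

The online variant we target differs in that samples arrive as a stream rather than sitting in a fixed pool, but the query criterion — spend labeling budget only where the model is uncertain — is the same.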