Ashwin Ashok

Email: ashwinashok(AT)cmu(DOT)edu

Address: CIC 2127B, 4720 Forbes Ave, Pittsburgh, PA 15213


I will be starting as a tenure-track faculty member in the Dept. of Computer Science at Georgia State University in Fall 2016

I am looking for highly motivated graduate and undergraduate students to work on various facets of Mobile Systems research. Please email me if you are interested!


Research Interests:

My interests are broadly in mobile Cyber-Physical Systems (CPS) research. Specific areas of research interest include mobile Internet-of-Things (IoT), Cloud and Distributed Computing, Mobile & Vehicular Networking and Computing, Visible Light Communication (VLC) & Camera Communication, Computer Vision, and Wearable and Low-Power Systems.


Bio: I am currently a Postdoctoral Research Associate in ECE at Carnegie Mellon University under the mentorship of Prof. Peter Steenkiste (CMU) and in collaboration with Dr. Fan Bai (GM Research). I am affiliated with the CMU-GM Connected and Autonomous Driving Collaborative Research Lab (CRL) at CMU, where I am currently working on building cloud-computing systems for vehicular applications. I completed my Ph.D. in Oct 2014 at the Wireless Information Network Lab (WINLAB) at Rutgers University, where I worked under the guidance of Profs. Marco Gruteser, Narayan Mandayam and Kristin Dana. My doctoral thesis (slides) developed a novel interdisciplinary concept called visual MIMO that explores the use of cameras and other optical arrays as receivers in a communication system. I also interned at Qualcomm (NJ) for a summer, working on a visible light communication based application. As a graduate teaching assistant for four semesters at Rutgers, I taught courses on Linear Systems and on Probability and Random Processes, and delivered a guest lecture (title: Visible Light and Camera Communications) in Prof. Narayan Mandayam's Wireless Systems Design graduate course. I am an experimentalist with keen interests in system design and prototyping, and I love tinkering with circuits, Arduino and Raspberry Pi.


Selected Services:

Co-chair for MobiCom 2016 Workshops (Workshops List)

Co-chair for the Workshop on Wearable Systems and Applications - WearSys'2016 (Program Schedule)

Reviewer for IEEE Transactions on Mobile Computing (TMC), IEEE/ACM Transactions on Networking (ToN), and IEEE Wireless Communications Magazine


Google Scholar Profile  LinkedIn Profile  Research Projects  ResearchGate Profile  Publications


 

Research Projects

 
Vehicular Cloud Computing

The advent of Internet connectivity in vehicles offers the possibility of offloading computation- and data-intensive tasks from the on-board unit (OBU) to remote cloud servers for efficient execution. We design a dynamic framework for bringing cloud computing to vehicles, where applications embedded in the vehicle OBU can benefit from remote execution of tasks provided as services in the cloud.

Links: MCS workshop at Mobicom'2015 [paper], NeTS NSF Workshop 2015 [poster]
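At the core of such a framework is the decision of whether a given task is cheaper to run on the OBU or to ship to the cloud. Below is a minimal sketch of that decision under simplifying assumptions; the function name, parameters, and numbers are illustrative, not the actual system's API.

```python
# Illustrative sketch of an adaptive computation-offloading decision
# (hypothetical names and numbers; not the framework's actual interface).

def should_offload(local_exec_s, input_mb, output_mb,
                   uplink_mbps, downlink_mbps, cloud_exec_s):
    """Offload a task only if the estimated remote latency
    (upload + cloud execution + download) beats running it on the OBU."""
    upload_s = (input_mb * 8) / uplink_mbps
    download_s = (output_mb * 8) / downlink_mbps
    remote_s = upload_s + cloud_exec_s + download_s
    return remote_s < local_exec_s, remote_s

if __name__ == "__main__":
    # Example: a vision task on a 2 MB camera frame with a tiny result,
    # over a 6 Mbps cellular uplink.
    offload, remote_s = should_offload(local_exec_s=1.5, input_mb=2.0, output_mb=0.01,
                                       uplink_mbps=6.0, downlink_mbps=12.0,
                                       cloud_exec_s=0.2)
    print("offload" if offload else "run locally",
          f"(estimated remote latency {remote_s:.2f} s)")
```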
 
Wearable Device Authentication using Head-Movements

We design a novel user-authentication system, dubbed Headbanger, for head-worn wearable devices by monitoring the user's unique head-movement patterns in response to an external audio stimulus. Solutions today primarily rely on indirect authentication mechanisms through the user's smartphone, which can be cumbersome and susceptible to adversarial intrusion. Biometric solutions, on the other hand, are subject to the availability of specific sensors in the wearable unit. The proposed head-movement based user authentication effectively addresses these concerns, providing an accurate, robust, lightweight and convenient solution.

Links: PerCom 2016 [paper], PerCom 2016 Demo [coming soon]
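As a rough illustration of the matching step, the sketch below compares a freshly recorded head-movement trace against an enrolled template using a simple normalized correlation score. The actual system uses richer features and classifiers; the names, threshold, and synthetic traces here are assumptions for illustration only.

```python
import numpy as np

def authenticate(sample, template, threshold=0.8):
    """Accept the wearer only if the recorded head-movement trace (e.g., an
    accelerometer magnitude signal) correlates strongly with the enrolled
    template. Traces are assumed to be equal length and time-aligned."""
    sample = (sample - sample.mean()) / (sample.std() + 1e-9)
    template = (template - template.mean()) / (template.std() + 1e-9)
    score = float(np.dot(sample, template)) / len(sample)  # correlation in [-1, 1]
    return score >= threshold, score

# Synthetic example: the genuine user's response resembles the enrolled
# template, while an impostor's response does not.
t = np.linspace(0, 4, 200)
template = np.sin(2 * np.pi * 1.5 * t)
genuine = template + 0.2 * np.random.randn(200)
impostor = np.sin(2 * np.pi * 0.7 * t) + 0.2 * np.random.randn(200)
print(authenticate(genuine, template))   # (True, high score)
print(authenticate(impostor, template))  # (False, low score)
```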
 
Screen-to-Camera Communication

The ubiquitous use of QR codes motivates building novel camera-communication applications where cameras can decode information from pervasive display screens such as billboards, TVs, and computer monitors. We are studying and exploring methods, inspired by communication and computer vision techniques, to communicate from display screens to off-the-shelf camera devices.

Links: INFOCOM'16 [paper], WACV'16 [paper], PerCom'14 [paper], WACV'12 [paper], CVPR'12 [poster], IEEE GlobalSIP'13 [paper]
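A common intuition behind flicker-free embedding in this space is to hide each bit in a complementary pair of frames, so the time-average looks like the original content while the frame difference carries the data. The toy sketch below illustrates that idea per image block under simplifying assumptions (perfect frame capture, no perspective distortion); it is not the published scheme.

```python
import numpy as np

def embed_bits(frame, bits, delta=4.0):
    """Embed one bit per horizontal block into a complementary frame pair:
    (+delta, -delta) for '1' and (-delta, +delta) for '0'. Averaged over the
    pair, the displayed content equals the original frame, which is what
    keeps the embedding visually flicker-free."""
    h, _ = frame.shape
    bh = h // len(bits)
    f1, f2 = frame.astype(float).copy(), frame.astype(float).copy()
    for i, b in enumerate(bits):
        sign = 1.0 if b else -1.0
        f1[i * bh:(i + 1) * bh, :] += sign * delta
        f2[i * bh:(i + 1) * bh, :] -= sign * delta
    return f1, f2

def decode_bits(f1, f2, n_bits):
    """Recover the bits from a captured frame pair by per-block differencing."""
    bh = f1.shape[0] // n_bits
    return [int((f1[i * bh:(i + 1) * bh] - f2[i * bh:(i + 1) * bh]).mean() > 0)
            for i in range(n_bits)]

frame = np.random.randint(0, 256, (240, 320))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
f1, f2 = embed_bits(frame, bits)
print(decode_bits(f1, f2, len(bits)))  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```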
 
Low-Power Radio-Optical based Positioning for Wearables

Estimating the position of nearby devices accurately and at fine resolution is a hard problem, yet one of importance to many context-aware applications such as augmented reality, autonomous automotive systems, and smart manufacturing systems. We are exploring a technique that integrates wireless wearable devices with hardware adjuncts to provide precise and highly accurate positioning of objects and people in indoor environments.

Links: TMC'16 [paper], VTC'15 [Invited paper], IPSN'15 [paper], Microsoft Indoor Localization Competition 2014 [link], Mobisys'13 demo [paper][demo video]
 
 
Capacitive Touch Communication - A Technique to Input Data Through a Device's Touchscreen

As we are surrounded by an ever-larger variety of post-PC devices, the traditional methods for identifying and authenticating users have become cumbersome and time-consuming. This work presents a capacitive communication method through which a device can recognize who is interacting with it. It exploits the capacitive touchscreens now used in laptops, phones, and tablets.

Links: Mobisys'12 demo [paper][Demo video][InterDigital Proposal Video]
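Conceptually, the identifier can be thought of as a low-rate on-off keyed signal that a small token injects into the touchscreen while the user is touching it. The sketch below decodes such a signal from touch-event timestamps; it is a conceptual illustration with hypothetical parameters, since the real system operates on raw capacitance measurements rather than OS-level touch events.

```python
def decode_touch_id(touch_timestamps, bit_period_s, n_bits, start_s=0.0):
    """Recover an n-bit identifier sent via on-off keying: a touch event
    reported during a bit slot is read as '1', an empty slot as '0'."""
    bits = []
    for i in range(n_bits):
        slot_start = start_s + i * bit_period_s
        slot_end = slot_start + bit_period_s
        bits.append(int(any(slot_start <= t < slot_end for t in touch_timestamps)))
    return bits

# Touch events at 0.01 s, 0.21 s and 0.31 s with a 0.1 s bit period
# decode to the 4-bit identifier 1011.
print(decode_touch_id([0.01, 0.21, 0.31], bit_period_s=0.1, n_bits=4))  # -> [1, 0, 1, 1]
```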
 
 
Privacy Respecting Cameras

The ubiquity of cameras in today's world has played a key role in the growth of sensing technology and mobile computing. On the other hand, it has also raised serious concerns about the privacy of people who are photographed, intentionally or unintentionally. We are exploring the use of near-visible/infrared light communication to design "invisible light beacons" through which the privacy preferences of photographed users are communicated to cameras. In particular, we explore a design where the beacon transmitters are worn by users on their eye-wear and transmit a privacy code through ON-OFF patterns of light beams from IR LEDs.

Links: VLCS/MobiCom'14 [paper]
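For intuition, once the beacon has been located in the image, reading the privacy code amounts to thresholding the beacon's brightness in consecutive frames into bits. The toy decoder below assumes one bit per frame and a simple midpoint threshold; beacon detection, tracking, and synchronization, which the actual system must handle, are omitted.

```python
import numpy as np

def decode_privacy_code(frame_brightness, n_bits):
    """Turn the beacon region's per-frame brightness samples into a privacy
    code, assuming one bit per frame and a midpoint threshold between the
    brightest (ON) and dimmest (OFF) samples."""
    samples = np.asarray(frame_brightness[:n_bits], dtype=float)
    threshold = (samples.max() + samples.min()) / 2
    return [int(s > threshold) for s in samples]

# e.g. brightness of the eye-wear beacon over 8 consecutive frames
code = decode_privacy_code([210, 40, 205, 190, 35, 50, 215, 45], n_bits=8)
print(code)  # -> [1, 0, 1, 1, 0, 0, 1, 0]; the camera can then honor a "do not share" tag
```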
 
Time-of-Flight Camera-based Communication

Time-of-Flight cameras, or depth-sensing cameras like Microsoft's Kinect, have become popular because they can measure how far objects are from the camera, i.e., sense depth. We are exploring techniques to use such cameras for communication along the lines of visual MIMO.

Links: ICCP'14 [paper]
 
LED-to-Camera Communication for Car-2-Car Communication

The inherent limitations in RF spectrum availability and susceptibility to interference make it difficult to meet the reliability required for automotive safety applications. Visual MIMO applied to vehicular communication proposes to reuse existing LED rear lights and headlights as transmitters and existing cameras (e.g., those used for parking assistance and rear-view monitoring) as receivers. We designed a proof-of-concept prototype of a visual MIMO system consisting of an LED transmitter array and a high-speed camera. We also propose link-layer techniques, such as rate adaptation, for adapting to visual channel distortions in such systems.

Links: Mobisys'11 [paper][WINLAB Poster], SECON'11 [paper][slides]
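As a rough illustration of rate adaptation in this setting: when the car is close, each LED occupies enough camera pixels to be resolved individually, so every LED can carry its own bit stream (multiplexing); as distance and perspective distortion grow, LEDs must be grouped or made to repeat one symbol (diversity). The thresholds below are illustrative assumptions, not values from the paper.

```python
def select_tx_streams(pixels_per_led):
    """Rate-adaptation sketch for a 16-LED array: the more camera pixels each
    LED occupies (a proxy for distance and perspective distortion), the more
    independent LED streams the receiver can resolve, and thus the higher the
    usable data rate. Thresholds are made up for illustration."""
    if pixels_per_led >= 9:    # LEDs well separated in the image
        return 16              # multiplex: every LED carries its own bit
    if pixels_per_led >= 2:    # LEDs only partially resolvable
        return 4               # group LEDs into 4 blocks
    return 1                   # diversity: all LEDs repeat a single symbol

# Close range vs. far range
print(select_tx_streams(pixels_per_led=12))  # -> 16
print(select_tx_streams(pixels_per_led=1))   # -> 1
```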
 
Visual Channel Models

Visual channels are characterized primarily by visual distortions, unlike RF MIMO channels where multipath and fading dominate. We model a visual MIMO channel subject to perspective distortions, artifacts due to lens blur, spatial interference from multiple light emitters, and synchronization mismatch between the transmitter and the camera. We borrow from computer vision theory and also propose techniques that apply to camera-based communication channels in general.

Links: Mobicom'10 [paper], [Rutgers Engineering Week Poster], CISS'11 [paper], ProCam'11 [paper]
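The geometric core of the perspective-distortion part of the model is a pinhole projection of the transmitter's light emitters onto the camera's image plane; lens blur, inter-emitter interference, and synchronization effects are then applied on top of the projected positions. Below is a minimal sketch of that projection step, with made-up camera parameters for illustration.

```python
import numpy as np

def project_leds(led_xyz, focal_px, cx, cy):
    """Pinhole-camera projection of LED positions (camera coordinates, metres)
    onto the image plane. Lens blur and spatial interference would be applied
    on top of these pixel locations (not modeled in this sketch)."""
    led_xyz = np.asarray(led_xyz, dtype=float)
    u = focal_px * led_xyz[:, 0] / led_xyz[:, 2] + cx
    v = focal_px * led_xyz[:, 1] / led_xyz[:, 2] + cy
    return np.stack([u, v], axis=1)

# A 2x2 LED array 20 m from the camera: at that range the whole array
# collapses to just a few pixels, which is exactly the kind of perspective
# distortion the channel model captures.
print(project_leds([[-0.1, -0.1, 20], [0.1, -0.1, 20],
                    [-0.1, 0.1, 20], [0.1, 0.1, 20]],
                   focal_px=800, cx=320, cy=240))
```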

 

Publications

NOTE: The material below is presented to allow timely dissemination of the work. Copyrights and all rights therein are retained by the copyright holders.

2016 (to appear)

Ashwin Ashok, Chenren Xu, Tam Vu, Marco Gruteser, Yanyong Zhang, Rich Howard, Narayan Mandayam, Wenjia Yuan, Kristin Dana, What Am I Looking At? Low-Power Radio-Optical Beacons For In-View Recognition Using Smart-Glasses, IEEE Transactions on Mobile Computing (TMC), 2016

Sugang Li, Ashwin Ashok, Chenren Xu, Yanyong Zhang, Marco Gruteser, Janne Lindqvist, Narayan Mandayam, Demo of HeadBanger: Authenticating Smart Wearable Devices Using Unique Head Movement Patterns, Invited DEMO at IEEE Conference on Pervasive Computing and Communications (PerCom), 2016

Sugang Li, Ashwin Ashok, Chenren Xu, Yanyong Zhang, Marco Gruteser, Janne Lindqvist, Narayan Mandayam, Whose Move is it Anyway? Authenticating Smart Wearable Devices Using Unique Head Movement Patterns, Accepted to IEEE Conference on Pervasive Computing and Communications (PerCom), 2016

Viet Nguyen, Yaqin Tang, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Eric Wengrowski, Kristin Dana, High-Rate Flicker-Free Screen-Camera Communication with Spatially Adaptive Embedding, Accepted to IEEE Conference on Computer Communications (INFOCOM), 2016

2015

Eric Wengrowski, Wenjia Yuan, Kristin Dana, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Optimal Radiometric Calibration for Camera-Display Communication, Accepted to IEEE Conference on the Applications of Computer Vision (WACV), 2016

Ashwin Ashok, Emerging Cyber-Physical Systems: Vehicular Cloud Computing, NSF NeTS Early Career Workshop, Arlington, VA, 30-31 July 2015.

Ashwin Ashok, Peter Steenkiste, Fan Bai, Enabling Vehicular Applications using Cloud Services through Adaptive Computation Offloading, Mobile Cloud Services Workshop, MobiCom, Sep 2015.

Ashwin Ashok, Chenren Xu, Tam Vu, Marco Gruteser, Rich Howard, Yanyong Zhang, Narayan Mandayam, Wenjia Yuan, Kristin Dana, Low-Power Radio-Optical Beacons for In-View Recognition (Invited Paper: Emerging Technologies: Light-based Communications and Positioning track), IEEE Vehicular Technology Conference (VTC), Sep 2015, Boston, USA.

Dimitrios Lymberopoulos et al., A Realistic Evaluation and Comparison of Indoor Location Technologies: Experiences and Lessons Learned, in IEEE/ACM International Conference on Information Processing in Sensor Networks (IPSN), 2015. (The paper includes the participants of the Microsoft Indoor Localization Competition 2014 as co-authors; I was the lead of Team 5.)

2014

Ashwin Ashok, Shubham Jain, Marco Gruteser, Narayan Mandayam, Wenjia Yuan, and Kristin Dana, Capacity of Screen-Camera Communications Under Perspective Distortions, in Elsevier Pervasive and Mobile Computing Journal (PMC), Dec 2014.

Ashwin Ashok, Viet Nguyen, Marco Gruteser, Narayan Mandayam, Wenjia Yuan, and Kristin Dana, Do Not Share! Invisible Light Beacons for Signalling Preferences to Privacy-Respecting Cameras, in Proceedings of ACM MobiCom, VLCS Workshop, 2014.

Ashwin Ashok, Shubham Jain, Marco Gruteser, Narayan Mandayam, Wenjia Yuan, and Kristin Dana, Capacity of Pervasive Camera Based Communications Under Perspective Distortions, in Proceedings of IEEE Pervasive Computing and Communications (PerCom), 2014.

Wenjia Yuan, Kristin Dana, Rich Howard, Ashwin Ashok, Ramesh Raskar, Marco Gruteser, and Narayan Mandayam, Phase Messaging Method for Time-of-flight Cameras, in ICCP: Proceedings of IEEE International Conference on Computational Photography, 2014.

2013

Ashwin Ashok, Chenren Xu, Tam Vu, Marco Gruteser, Yanyong Zhang, Rich Howard, Narayan Mandayam, Wenjia Yuan, Kristin Dana, Demo: BiFocus - Using Radio-Optical Beacons for An Augmented Reality Search Application, Proceedings of ACM/USENIX International Conference on Mobile Systems, Applications, and Services (MobiSys), 2013

Wenjia Yuan, Kristin Dana, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Spatially Varying Radiometric Calibration for Camera-Display Messaging, IEEE Global Conference on Signal and Image Processing (GlobalSIP) Symposium on Mobile Imaging, Dec 2013

2012

Wenjia Yuan, Kristin Dana, Ashwin Ashok, Michael Varga, Marco Gruteser, Narayan Mandayam, Dynamic and Invisible Messaging for Visual MIMO, Proceedings of the IEEE Workshop on the Applications of Computer Vision (WACV), pp. 345-352, 2012

Wenjia Yuan, Kristin Dana, Ashwin Ashok, Michael Varga, Marco Gruteser, Narayan Mandayam, Photometric Modeling for Active Scenes, IEEE CVPR Workshop on Computational Cameras and Displays, Poster Presentation, 2012

Tam Vu, Ashwin Ashok, SignetRing: Distinguishing Users and Devices using Capacitive Touch Communication, InterDigital Innovation Challenge (I2C) finalist, 2012 (proposal available upon request)

Tam Vu, Ashwin Ashok, Akash Baid, Marco Gruteser, Richard Howard, Janne Lindqvist, Predrag Spasojevic, Jeffrey Walling, Demo: User Identification and Authentication with Capacitive Touch Communication, Proceedings of ACM/USENIX International Conference on Mobile Systems, Applications, and Services (MobiSys), 2012

2011

Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Kristin Dana, Characterizing Multiplexing and Diversity in Visual MIMO, in Proceedings of the 45th Annual Conference on Information Sciences and Systems (CISS), pp. 1-6, 23-25 March 2011

Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Ted Kwon, Wenjia Yuan, Michael Varga, Kristin Dana, Rate Adaptation in Visual MIMO, Proceedings of IEEE Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), pp. 583-591, 2011

Wenjia Yuan, Kristin Dana, Michael Varga, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Computer Vision Methods for Visual MIMO Optical System, Proceedings of the IEEE International Workshop on Projector-Camera Systems (held with CVPR), pp. 37-43, 2011

Michael Varga, Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Wenjia Yuan, Kristin Dana, Demo: Visual MIMO-based LED-Camera Communication Applied to Automobile Safety, Proceedings of ACM/USENIX International Conference on Mobile Systems, Applications, and Services (MobiSys), pp. 383-384, 2011

2010

Ashwin Ashok, Marco Gruteser, Narayan Mandayam, Jayant Silva, Michael Varga, and Kristin Dana, Challenge: Mobile Optical Networks Through Visual MIMO, in MobiCom: Proceedings of the Sixteenth Annual International Conference on Mobile Computing and Networking. New York, NY, USA: ACM, pp. 105-112, 2010.

News!

NSF Panel

Have been invited to serve as a panelist on an NSF program in 2016

VLCS TPC

Have been invited to serve on the TPC of Visible Light Communication Systems (VLCS) workshop to be held with MobiCom 2016

SIGGRAPH reviewer

Will be serving as a reviewer for ACM SIGGRAPH 2016

Paper accepted at TMC Journal (preprint)

Our work on Radio-Optical Beacon based Augmented Reality Glasses accepted for publication in the IEEE Transactions on Mobile Computing journal

Papers accepted at PerCom and INFOCOM '2016

Papers on wearable device authentication and on flicker-free screen-camera communication using TextureCode accepted at PerCom and INFOCOM, respectively

Co-chairing Mobicom Workshops 2016

Will be co-chairing Mobicom workshops with Prof. Robin Kravets (UIUC), to be held in New York City.

Invited to BIGCOM and MOBIMEDIA 2016 TPC

Will serve on the technical program committees of the International Conference on Big Data Computing and Communications (BIGCOM) and MOBIMEDIA 2016. Consider submitting!

Paper accepted at WACV'2016

Paper on radiometric calibration for screen-camera communication accepted at WACV'2016

NSF NeTS Early Career Investigators Workshop

Invited as a participant in the NSF NeTS Early Career Investigators Workshop.

Paper accepted at MCS'2015

Recent work on cloud computing for vehicular applications accepted at the MCS workshop@Mobicom'15

Invited paper at VTC-Boston

My work on low-power radio-optical tags has been accepted as an invited paper in the Emerging Technologies: Light-based Communications and Positioning track at VTC in Boston (Sep 2015)

WearSys 2015 (in conjunction with MobiSys)!!

I will be co-chairing the Workshop on Wearable Systems and Applications, with Dr. Jie Liu from MSR and Prof. Suman Banerjee from UWisc. Click here for more information. Consider submitting!

Indoor Localization competition results at IPSN 2015!

The participants and organizers of the Microsoft Indoor Localization Competition 2014 wrote a paper on the results and lessons learned, which has been accepted at IPSN 2015

Started post-doc@CMU Oct 2014!

Working on building a cloud-offloading framework for automobile services -- the hope is to make computer vision tasks seamless in vehicular applications

Privacy Respecting Cameras work at VLCS/Mobicom Workshop

Our paper "Do Not Share! Invisible Light Beacons for Signalling Preferences to Privacy-Respecting Cameras" was presented at the 1st ACM Workshop on Visible Light Communications and Systems. This was part of the MobiCom conference 2014.

Successfully defended Ph.D. June 2014!

Dissertation title: Design, Modeling and Analysis of Visual MIMO Communication

Indoor Localization Competition

I will lead a team that will present a localization system that uses our hybrid radio-optical beaconing tags and receiver

ICCP 2014

Our paper on the phase-messaging array -- Time-of-Flight camera based communication -- has been accepted to the International Conference on Computational Photography


PerCom 2014

Our screen-camera communication capacity paper has been accepted to PerCom


Mobisys 2014 Ph.D. forum

Have been invited to serve as a TPC member of Mobisys'2014 Ph.D. forum


Talk at Stevens

On Nov 6, 2013, I gave a talk at the Stevens Institute of Technology ECE Seminar on Camera Based Communication Using Visual MIMO


Internship at Qualcomm

On Aug 16, 2013, I completed my summer internship at Qualcomm, NJ, where I worked with Dr. Aleksandar Jovicic on a VLC application.


Talk at Qualcomm

Gave a talk on Camera Based Optical Wireless at the Qualcomm New Jersey Research Center in July 2013


Mobisys'2013 Demo

I presented a demo of our "Augmented Reality Glasses" (see BiFocus: Radio-Optical Beaconing for Augmented Reality Search Application) at the Mobile Systems, Applications, and Services Conference (MobiSys) in Taipei, Taiwan