I build technologies that draw on advances in Human-Computer Interaction, Natural Language Processing, Crowdsourcing, and Machine Learning to solve real-world problems, such as enabling access to information via sign language for people who are deaf and supporting independent living for people with visual impairments.

My research investigates models and novel interfaces that address challenges arising from health conditions or disability, with an emphasis on rigorous experimental methodologies for evaluating impact with users.

Recent Publications

Google Scholar

2017

  1. People with Visual Impairment Training Personal Object Recognizers: Feasibility and Challenges.
    Kacorri, H., Kitani, K.M., Bigham, J.P., and Asakawa, C. CHI 2017.
    Best Paper Honorable Mention
  2. Regression Analysis of Demographic and Technology Experience Factors Influencing Acceptance of Sign Language Animation.
    Kacorri, H., Huenerfauth, M., Ebling, S., Patel, K., Menzies, K., and Willard, M. TACCESS 2017 (in press).

2016

  1. Supporting Orientation of People with Visual Impairment: Analysis of Large Scale Usage Data.
    Kacorri, H., Mascetti, S., Gerino, A., Ahmetovic, D., Takagi, H., and Asakawa, C. ASSETS 2016.
    Best Paper Finalist
  2. Continuous Profile Models in ASL Syntactic Facial Expression Synthesis.
    Kacorri, H. and Huenerfauth, M. ACL 2016.
  3. Selecting Exemplar Recordings of American Sign Language Non-Manual Expressions for Animation Synthesis Based on Manual Sign Timing.
    Kacorri, H. and Huenerfauth, M. INTERSPEECH - Speech and Language Processing for Assistive Technologies (SLPAT 2016).
  4. Centroid-Based Exemplar Selection of ASL Non-Manual Expressions using Multidimensional Dynamic Time Warping and MPEG-4 Features.
    Kacorri, H., Syed, A.R., Huenerfauth, M., and Neidle, C. LREC - Representation and Processing of Sign Languages 2016.
  5. Eyetracking Metrics Related to Subjective Assessments of ASL Animations.
    Huenerfauth, M. and Kacorri, H. Journal on Technology & Persons with Disabilities, Volume 4, 2016.
  6. Data-Driven Synthesis and Evaluation of Syntactic Facial Expressions in American Sign Language Animation.
    Kacorri, H. Doctoral Dissertation, Computer Science, Graduate Center, CUNY.

2015

  1. Best Practices for Conducting Evaluations of Sign Language Animation.
    Huenerfauth, M. and Kacorri, H. Journal on Technology & Persons with Disabilities, Volume 3, 2015.
  2. Demographic and Experiential Factors Influencing Acceptance of Sign Language Animation by Deaf Users.
    Kacorri, H., Huenerfauth, M., Ebling, S., Patel, K., and Willard, M. ASSETS 2015.
  3. CapCap: An Output-Agreement Game for Video Captioning.
    Kacorri, H., Shinkawa, K., and Saito, S. INTERSPEECH 2015.
  4. Evaluating a Dynamic Time Warping Based Scoring Algorithm for Facial Expressions in ASL Animations.
    Kacorri, H. and Huenerfauth, M. INTERSPEECH - SLPAT 2015.
  5. Comparison of Finite-Repertoire and Data-Driven Facial Expressions for Sign Language Avatars.
    Kacorri, H. and Huenerfauth, M. UAHCI 2015.
  6. Augmenting EMBR Virtual Human Animation System with MPEG-4 Controls for Producing ASL Facial Expressions.
    Huenerfauth, M. and Kacorri, H. SLTAT 2015.