Akshay Mendhakar

About Me

An experienced researcher in speech and language technology with a strong background in computational linguistics, machine learning, and cognitive psychology. My work centres on human-centred AI that enhances communication, diagnostics, and education: I develop and optimise speech- and language-based AI systems, including voice-driven interfaces for clinical diagnostics and educational applications, and I use eye-tracking and AI technologies to study social interaction and communication patterns in real time. Experienced in Natural Language Processing (NLP), Large Language Models (LLMs), and deep learning for modelling cognitive and linguistic processes in clinical and multilingual settings; proficient in managing large datasets, designing computational models, and using speech biomarkers for speech evaluation and diagnostics. Strong track record of securing research funding, leading collaborative projects with academic, government, and industry partners, and teaching and supervising undergraduate and postgraduate students in developing and evaluating AI-driven speech and language technologies. Published in high-impact journals and conferences, my work balances fundamental research with applied solutions. Prepared to contribute to the research agenda of the Centre for Language Studies, with a focus on voice-driven interfaces, multimedia understanding, and speech-based diagnostics. Proficient in Python, R, and MATLAB, and experienced in deploying deep learning frameworks for real-world speech and language applications. Committed to fostering an inclusive and interdisciplinary research environment.

  • Employer: All India Institute of Speech & Hearing
  • Address: Mysore, Karnataka, India
  • E-mail: amendhakar@gmail.com

  • Core Interests: Speech Technology • Language Science • Artificial Intelligence • Eye Tracking • Machine Learning • Assistive Technologies • Human-Computer Interaction • Multilingualism • Deep Learning Frameworks

Degrees Held

SLP Internships

SLP/Audiologist

Jul 2014 – Nov 2014 AIISH

Advisor: Dr. M. Pushpavathi

SLP

Dec 2014 – Feb 2015 Civil Hospital, Dharwad

Advisor: Dr. Ashok M. (Senior Specialist Surgeon)

Audiologist

Feb 2015 – Apr 2015 Karnataka Institute of Medical Sciences (KIMS)

Advisor: Dr. P. Ravindra Gadag (HOD of ENT)

SLP

Apr 2015 – May 2015 Civil Hospital, Haveri

Advisor: Dr. Mallikarjun (Senior Specialist Surgeon)

Internships

Head, R&D Team

Oct 2018 – Dec 2018 VoiceTech, Mumbai.

Advisor: Mr. Deven Vartak.

Maintenance Engineer

Oct 2018 – Dec 2018 Voice and Speech Systems, Bangalore.

Advisor: Dr. Ananthapadmanabhan T. V.

Positions Held

Visiting Researcher

Project Title: Literary Text Comprehension and Perception

July 2020 – Present

Research Officer

Project Title: Development of Speech-Enabled Communication Tool for Clients with Speech Impairment in Kannada

Oct 2019 – July 2020

Speech-language Pathologist – Grade II

Department of Clinical Services, All India Institute of Speech and Hearing

Jan 2019 – Sep 2019

Product Development Specialist

Exsof Technologies – data mining and machine interaction related products

Oct 2018 – Dec 2018

Speech-language Pathologist – Grade I

DHLS, Mysore, All India Institute of Speech and Hearing

Oct 2017 – Oct 2018

Speech-language Pathologist – Grade II

Project Title: Survey of Communication Disorders by Trained ASHA Workers in the Districts of Mysuru, Mandya and Chamarajanagara.

Jul 2017 – Oct 2017

Publications

  • [1] Shetty, Hemanth Narayana, and Akshay Mendhakar. "Deep band modulation and noise effects: Perception of phrases in adults." Hearing, Balance and Communication 13.3 (2015): 111-117. http://dx.doi.org/10.3109/21695717.2015.1058609

  • [2] Shetty, H. N., & Mendhakar, A. (2015). Development of phrase recognition test in Kannada language. Journal of Indian Speech Language & Hearing Association, 29(2), 21. Available from: http://www.jisha.org/text.asp?2015/29/2/21/185976

  • [3] Mendhakar, Akshay M., Sangeetha, M., & Rakshith, S. (2016). Application of Automatic Speech Recognition in Communication Sciences and Disorders. Journal of Acoustical Society of India, 43(2), 108-115. Retrieved December 06, 2016, from http://www.acousticsindia.org/JASI2016Vol-43(1-4).pdf

  • [4] Akshay Mendhakar, Devi N., Renuka Chandrakanth, & Ajish K. Abraham. (2018). Need for Robotic Resources in Speech and Hearing - An Indian Study. International Journal of Advances in Science Engineering and Technology, ISSN(p): 2321-8991, ISSN(e): 2321-9009, Vol. 6, Iss. 1, Spl. Issue 1, Feb. 2018. http://www.iraj.in/journal/journal_file/journal_pdf/6-445-152327495635-38.pdf

  • [5] B. P., Abhishek, & Mendhakar, A. (2018). Utility of Mobile Apps in Speech and Language Therapy: A Survey Study. International Journal of Biomedical Engineering, 4(1), 1-3. http://materials.journalspub.info/index.php?journal=JBMBE&page=article&op=view&path%5B%5D=363

  • [6] Mendhakar, A., & Priyadarshi, B. (2018). A relook into language tests in India: An explorative study, 6(2), 1-20. Retrieved from http://jclad.scienceres.com/current_issue/Vol_6_issue_2_FULL_ISSUE.pdf

  • [7] Mendhakar, A. M., & Mahesh, S. (2018). Automatic annotation of reading using Speech Recognition: A pilot study. Research & Reviews: A Journal of Bioinformatics, 5(2), 25-29. http://techjournals.stmjournals.in/index.php/RRJoBI/article/view/216

  • [8] Mendhakar, A. M., Baby, Eliza, & Renuka, C. (2018, December). Can we Accelerate Autism Discoveries through Eye Tracking and Machine Learning Technology? In Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), 2018 International Conference on (pp. 1549-1552). IEEE.

  • [9] Mendhakar, A. M., & Mahesh, S. (2018, December). Automatic analysis of stuttered speech using dynamic time warping. In Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), 2018 International Conference on (pp. 1745-1748). IEEE.

  • [10] Mendhakar, A. M., Sreedevi, N., Arunraj, K., & Shanbal, J. C. (2019). Infant Screening System Based on Cry Analysis. International Annals of Science, 6(1), 1-7. https://journals.aijr.in/index.php/ias/article/download/757/180

  • [11] Hemanth Narayan Shetty & Akshay Mendhakar (2019). Effect of rate altered perception of deep band modulated phrase in noise from normal hearing younger and older adult groups. Hearing, Balance and Communication, 17(2), 154-164. DOI: 10.1080/21695717.2019.1591006

Conference Presentations

  • [1] Mendhakar, A. M., & Mahesh, S. (2018, December). Automatic analysis of stuttered speech using dynamic time warping. In Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), 2018 International Conference on (pp. 1745-1748). IEEE.

  • [2] Akshay, M., Sangeetha, M., & Fathima, M. (2015). Automatic annotation of read speech using speech recognition: A pilot study. Proceedings of National Symposium on Acoustics - Book of Abstracts (NSA, 7 to 9 October 2015, Goa), 79, CSIR-NIO: Belgaum, India. http://www.aiishmysore.in/en/pdf/Clinical_services_research_papers_published_in_conference.pdf

  • [3] Sreedevi, N., Shanbal, J. C., Arunraj, K., Neethu, T., & Akshay, M. (2016). An Innovative Computerized Screening Tool for Infant Cry Analysis in the Detection of Communication Disorders - A Pilot Study. Paper presented at India International Science Festival, 7th-11th December, CSIR-NPL, 2016.

  • [4] Akshay M. Mendhakar, Eliza Baby, & Abhishek B. P. (2018). Aphasia Confidence Index (ACI) - An objective measure of aphasic. Annual Convention of Indian Speech Language and Hearing Association (ISHA Con-2018), Mysuru (India), January 2018.

  • [5] Akshay Mendhakar & Jayashree C. Shanbal (2018). Can we accelerate the identification of dyslexia using machine learning algorithms? Paper presented at India International Science Festival, 5th-8th October, IGP-YSC, Lucknow, 2018.

  • [6] Jayashree C. Shanbal, Akshay Mendhakar, Ashwini B. N., & Neha Yadav (2018). Visual processing in children with Dyslexia: Through Eye-gaze technology. Paper presented at India International Science Festival, 5th-8th October, IGP-YSC, Lucknow, 2018.

  • [7] Mendhakar, A. M., Sneha, K., Devi, N., & Renuka, C. (2018, December). Adaptive machine learning - An innovative hearing aid technology. In Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), 2018 International Conference on (pp. 1319-1322). IEEE.

  • [8] Mendhakar, A. M., Baby, Eliza, & Renuka, C. (2018, December). Can we Accelerate Autism Discoveries Through Eye Tracking and Machine Learning Technology? In Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), 2018 International Conference on (pp. 1549-1552). IEEE.
