Dr. P. Vijayalakshmi, Ph.D

Professor
Department of Electronics and Communications Engineering
Email: vijayalakshmip@ssn.edu.in

 

Dr. P. Vijayalakshmi (IEEE Member 2008-15, Senior Member since 2016; Member, IEEE Signal Processing Society; Fellow, IETE), Professor in the Department of Electronics and Communication Engineering, has 21 years of teaching and research experience, including 4 years of exclusive research experience in speech signal processing and speech pathology.

Education

She received her B.E. (ECE) degree with first class with distinction from Bharathidasan University, completed her M.E. (Communication Systems) at the Regional Engineering College, Trichy (now NIT Trichy), and earned her Ph.D. degree from IIT Madras, during which she worked as a doctoral trainee for a year at INRS-EMT, Montreal, Canada.

During her Ph.D., she developed several speech recognition systems and a novel approach for the detection and assessment of disordered speech, such as hypernasal and dysarthric speech, in addition to analyzing normal speech. She also worked with Prof. Douglas O'Shaughnessy at the National Institute of Scientific Research (INRS), Montreal, Canada, as a doctoral trainee for one year on a project titled "Speech recognition and analysis".

Research

She has published over 70 research papers in refereed international journals and in the proceedings of international conferences. She is currently the principal investigator of a DST-TIDE-funded project and a co-investigator on projects funded by DeitY, MCIT, New Delhi, and by the Tamil Virtual Academy, a Government of Tamil Nadu organization. As a principal investigator she has completed one AICTE-funded project and two projects funded by the SSN Trust. She is a recognized supervisor of Anna University and is currently guiding three full-time and one part-time Ph.D. scholars in the field of speech technology.

Her areas of research include speech enhancement, voice conversion, polyglot speech synthesis, speech recognition, statistical parametric speech synthesis and speech technology for healthcare applications.

Publications

Book chapters

  1. P. Vijayalakshmi, T. Nagarajan, "Assessment and intelligibility modification for dysarthric speakers", Voice Technologies for Reconstruction and Enhancement, De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare (submitted).
  2. P. Vijayalakshmi, T. A. Mariya Celin, T. Nagarajan, "Selective pole modification-based technique for the analysis and detection of hypernasality", Signal and Acoustic Modeling for Speech and Communication Disorders, De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare (submitted).

Journal Publications

  1. S. Johanan Joysingh, P. Vijayalakshmi, T. Nagarajan, “Quartered Spectral Envelope and 1D-CNN-Based Classification of Normally Phonated and Whispered Speech”, Circuits, Systems, and Signal Processing, DOI: 10.1007/s00034-022-02263-5, 2022.
  2. S. Johanan Joysingh, P. Vijayalakshmi, and T. Nagarajan, “Chirp Group Delay-Based Onset Detection in Instruments with Fast Attack”, Circuits, Systems, and Signal Processing, pp. 1-24, Sep. 2022.
  3. T. A. Mariya Celin, P. Vijayalakshmi, and T. Nagarajan, “Data Augmentation Techniques for Transfer Learning-Based Continuous Dysarthric Speech Recognition”, Circuits, Systems, and Signal Processing, pp. 1-22, Aug. 2022.
  4. M. Nanmalar, P. Vijayalakshmi, T. Nagarajan, “Literary and Colloquial Tamil Dialect Identification”, Circuits, Systems, and Signal Processing, Vol. 41, pp. 4004-4027, March 2022, DOI: 10.1007/s00034-022-01971-2.
  5. K. Mrinalini, P. Vijayalakshmi, and T. Nagarajan, “SBSim: A Sentence-BERT Similarity-Based Evaluation Metric for Indian Language Neural Machine Translation Systems”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 30, pp. 1396-1406, March 2022.
  6. G. Anushiya Rachel, S. Sreenidhi, P. Vijayalakshmi, and T. Nagarajan, “Incorporation of Happiness in Neutral Speech by Modifying Time-Domain Parameters of Emotive Keywords”, Circuits, Systems, and Signal Processing, Vol. 41, pp. 2061-2087, Mar. 2022.
  7. P. Vijayalakshmi, T. Nagarajan, R. Jayapriya, S. Brathindara, K. Krithika, N. Nikhilesh, N. Naren Raju, S. Johanan Joysingh, V. Aiswarya, K. Mrinalini, “Development of a Low-Resource Wearable Continuous Gesture-to-Speech Conversion System”, Disability and Rehabilitation: Assistive Technology, pp. 1-13, Jan. 2022.
  8. M. Dhanalakshmi, T. Nagarajan, P. Vijayalakshmi, “Significant Sensors and Parameters in Assessment of Dysarthric Speech”, Sensor Review, Vol. 41, No. 3, pp. 271-286, July 2021.
  9. M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, “Adaptive Multi-Band Filter Structure-Based Far-End Speech Enhancement”, IET Signal Processing, Vol. 14, Issue 5, pp. 288-299, Jun. 2020.
  10. T. Lavanya, T. Nagarajan, and P. Vijayalakshmi, “Multi-Level Single-Channel Speech Enhancement Using a Unified Framework for Estimating Magnitude and Phase Spectra”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 28, pp. 1315-1327, Apr. 2020.
  11. T. A. Mariya Celin, G. Anushiya Rachel, T. Nagarajan, P. Vijayalakshmi, "Data Augmentation using virtual microphone array synthesis and multi-resolution feature extraction for isolated word dysarthric speech recognition," in IEEE Journal of Selected Topics in Signal Processing, DOI: 10.1109/JSTSP.2020.2972161, February 2020.
  12. T. A. Mariya Celin, G. Anushiya Rachel, T. Nagarajan, P. Vijayalakshmi, “A Weighted Speaker-Specific Confusion Transducer Based Augmentative and Alternative Speech Communication Aid for Dysarthric Speakers”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 27, Issue 2, pp. 187-197, Feb 2019.
  13. K. Mrinalini, T. Nagarajan, P. Vijayalakshmi, “Pause-Based Phrase Extraction and Effective OOV Handling for Low-Resource Machine Translation Systems”, ACM Transactions on Asian and Low Resource Language Information Processing, Vol. 18, Issue 2, pp. 12:1-12:22, Feb 2019.
  14. G. Anushiya Rachel, N. Sripriya, P. Vijayalakshmi, T. Nagarajan, “Significance of Differenced EGG Signal as a Spectrum in Phase-Difference Computation for the Estimation of Glottal Closure Instants”, Circuits, Systems, and Signal Processing, Vol. 37, Issue 5, pp. 2074-2097, May 2018.
  15. P. Vijayalakshmi, B. Ramani, M. P. Actlin Jeeva, T. Nagarajan, “A Multilingual to Polyglot Speech Synthesizer for Indian Languages Using a Voice-Converted Polyglot Speech Corpus”, Circuits, Systems and Signal Processing, Vol. 37, Issue 5, pp. 2142-2163, May 2018.
  16. M. Dhanalakshmi, T. A. Mariya Celin, T. Nagarajan, P. Vijayalakshmi, “Speech-Input Speech-Output Communication for Dysarthric Speakers Using HMM-Based Speech Recognition and Adaptive Synthesis System”, Circuits, Systems and Signal Processing, Vol. 37, Issue 2, pp. 674-703, Feb. 2018.
  17. A. Ahyisha Shabana, T. Lavanya, P. Vijayalakshmi, “Speaker Diarization for Conversational Speech Using Bayesian Information Criterion”, International Journal of Pure and Applied Mathematics, Vol. 119, No. 7, pp. 1109-1114, 2018.
  18. G. Anushiya Rachel, P. Vijayalakshmi, T. Nagarajan, "Estimation of Glottal Closure Instants from Degraded Speech using a Phase-Difference-Based Algorithm", Computer Speech and Language, Vol. 46, pp. 136-153, Nov. 2017.
  19. V. Sherlin Solomi, P. Vijayalakshmi, T. Nagarajan, "Exploiting Acoustic Similarities Between Tamil and Indian English in the Development of an HMM-based Bilingual Synthesizer", IET Signal Processing, Vol. 11, Issue 3, pp. 332-340, May 2017.
  20. M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, "DCT derived spectrum-based speech enhancement algorithm using temporal-domain multiband filtering", IET Signal Processing, Vol. 10, Issue 8, pp. 965-980, Oct. 2016.
  21. B. Ramani, M. P. Actlin Jeeva, P. Vijayalakshmi, T. Nagarajan, "A Multi-level GMM-Based Cross-Lingual Voice Conversion Using Language-Specific Mixture Weights for Polyglot Synthesis", Circuits, Systems and Signal Processing, Vol. 35, pp. 1283-1311, Apr. 2016.
  22. G. Anushiya Rachel, V. Sherlin Solomi, K. Naveenkumar, P. Vijayalakshmi, and T. Nagarajan, "A small footprint context-independent HMM-based speech synthesizer for Tamil", International Journal of Speech Technology, Vol. 18, Issue 3, pp. 405-418, Sep. 2015.
  23. P. Vijayalakshmi, T. Nagarajan, and M. Preethi, "Improving speech intelligibility in cochlear implants using acoustic models", WSEAS Transactions on Signal Processing, Vol. 7, Issue 4, pp. 103-116, Oct. 2011.
  24. P. Vijayalakshmi, T. Nagarajan and M. R. Reddy, "Assessment of articulatory and velopharyngeal sub-systems of dysarthric speech", International Journal of BSCHS, special issue on Biosensors: Data Acquisition, Processing and Control, Vol. 14, No. 2, pp. 87-94, June 2009.
  25. P. Vijayalakshmi, M. R. Reddy and Douglas O'Shaughnessy, "Acoustic analysis and detection of hypernasality using group delay function", IEEE Transactions on Biomedical Engineering, Vol. 54, No. 4, pp. 621-629, April 2007.
  26. M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, "Noise-Adaptive Speech-Specific Dynamic Filter Structure-Based Subbands for Simultaneous Improvement of Quality and Intelligibility of Far-end Speech", IEEE Transactions on Audio, Speech and Language Processing (submitted).
  27. G. Anushiya Rachel, P. Vijayalakshmi, T. Nagarajan, "Analysis of Algorithms to Estimate Glottal Closure Instants from Speech Signals", IET Signal Processing (submitted).
  28. B. Ramani, M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, "A Multilingual to Polyglot Speech Synthesizer for Indian Languages Using Multi-level GMM-based Cross-lingual Voice Conversion", Circuits, Systems and Signal Processing (second revision).

Conference Publications

  1. Johanan S, Vijayalakshmi P, Nagarajan T, "Development of Large Annotated Music Datasets using HMM based Forced Viterbi Alignment", IEEE TENCON'19.
  2. Nanmalar M, Vijayalakshmi P, Nagarajan T, "Literary and Colloquial Dialect Identification for Tamil using Acoustic Features", IEEE TENCON'19.
  3. Lavanya T, Mrinalini K, Vijayalakshmi P, Nagarajan T, "Histogram Matching based Optimized Energy Redistribution for Near End Listening Enhancement", IEEE TENCON'19.
  4. Gali Kavya Shree Sai, Ganapathi Ramanathan, Manne Muddu Reshma Priya and P. Vijayalakshmi, “Speech enabled virtual assistant using image captioning”, National Conference on Information and Communication Technologies, April 2019.
  5. Jayapriya R, Johanan Joysingh S, Vijayalakshmi P, “Development of MEMS sensor-based double handed gesture-to-speech conversion system”, IEEE International Conference on Vision Towards Emerging Trends in Communication Networking 2019 (VITECoN’19), March 2019.
  6. G. Anushiya Rachel, Vijayalakshmi P., T. Nagarajan, “Significance of Radius in the Phase-Difference-Based Approach to the Estimation of Glottal Closure Instants”, 2nd IEEE Int. Conf. on Computer, Communication and Signal Processing, Chennai, India, Feb. 2018, pp. 187-191.
  7. Nirmal Kumar, K. Mrinalini, Vijayalakshmi P., “Improving the performance of low-resource SMT using neural-inspired sentence generator”, 2nd IEEE Int. Conf. on Computer, Communication and Signal Processing, Chennai, India, Feb. 2018, pp. 192-195.
  8. V. Aiswarya, N. Naren Raju, S. Johanan Joysingh, T. Nagarajan, Vijayalakshmi P., "HMM based Sign Language to Speech Conversion System in Tamil", Int. Conf. on Biosignals, Images and Instrumentation (ICBSII), Mar. 2018, pp. 206-212.
  9. M. Dhanalakshmi, T. A. Mariya Celin, T. Nagarajan, Vijayalakshmi P., “Electromagnetic Articulograph Sensor-to-Sound Unit Mapping-Based Intelligibility Assessment of Dysarthric Speech”, IEEE TENCON, Malaysia, Nov. 2017, pp. 1784-1789.
  10. S. Johanan Joysingh, M. Nanmalar, G. Anushiya Rachel, V. Sherlin Solomi, Vijayalakshmi P., T. Nagarajan, “Development of a Speech-Enabled Interactive Enquiry System in Tamil for Agriculture", Tamil Internet Conference, Toronto, Canada, Aug. 2017.
  11. K. Mrinalini, G. Anushiya Rachel, T. Nagarajan, Vijayalakshmi P., “Sentence-Medial Pause Identification for Tamil Synthesis System”, Tamil Internet Conference, Toronto, Canada, Aug. 2017. (best paper award)
  12. D. S. K. Lena, Vijayalakshmi P., “Speech enhancement in vehicular environments as a front end for robust speech recognizer”, IEEE International Conference on Intelligent Computing and Control Systems (ICICCS 2017), Jun. 2017.
  13. Ahyisha Shabana A., Lavanya T., Vijayalakshmi P., “Speaker diarization for conversational speech using Bayesian information criterion”, 2nd International Conference on Recent Trends in Engineering and Technology, May 2017.
  14. T. A. Mariya Celin, T. Nagarajan, Vijayalakshmi P., "Dysarthric Speech Corpus in Tamil for Rehabilitation Research", IEEE TENCON, Singapore, pp. 2612-2615, Nov. 2016.
  15. Mrinalini K., Sangavi G., Vijayalakshmi P., "Performance Improvement of Machine Translation System using LID and Post-editing", IEEE TENCON 2016, Singapore, pp. 2136-2139, Nov. 2016.
  16. Sherlin Solomi V., Anushiya Rachel G., Vijayalakshmi P., Nagarajan T., "Phone Mapping-based Mixed language synthesizer for Tamil and Indian English", Tamil Internet Conference, Sept-9-11, 2016.
  17. Mrinalini K., Vijayalakshmi P., "LID based Post-Editing for Tamil Machine Translation system", Tamil Internet Conference, Sept-9-11, 2016.
  18. Aarthi M., Vijayalakshmi P., "Sign Language to Speech Conversion", Fifth International Conference on Recent Trends in Information Technology (ICRTIT), April 2016.
  19. Actlin Jeeva M. P, Nagarajan T, Vijayalakshmi P. "Formant-filters based multi-band speech enhancement algorithm for intelligibility improvement", National Conference on Communications (NCC-2016), IIT Guwahati, March 2016, pp. 243-248.
  20. M. P. Actlin Jeeva, T. Nagarajan, Vijayalakshmi P., "Temporal Domain Filtering Approach for Multiband Speech Enhancement", Int. Conf. on Microwave, Optical and Communication Engineering, IIT Bhubaneswar, December 2015.
  21. G. Anushiya Rachel, P. Vijayalakshmi and T. Nagarajan, "Estimation of Glottal Closure Instants from Telephone Speech using a Group Delay-Based Approach that Considers Speech Signal as a Spectrum", INTERSPEECH 2015, Germany. pp.1181-1185.
  22. M. Dhanalakshmi and P. Vijayalakshmi, "Intelligibility modification on Dysarthric speech using HMM-based adaptive synthesis system" in the Proceedings of IEEE sponsored International Conference on Biomedical Engineering (ICoBE 2015) at Penang, Malaysia, March 2015, pp. 49-53.
  23. B. Ramani, M.P. Actlin Jeeva, P. Vijayalakshmi, T. Nagarajan, "Cross-Lingual Voice Conversion-Based Polyglot Speech Synthesizer for Indian Languages", INTERSPEECH Singapore, 2014, pp. 775-779.
  24. V. SherlinSolomi, M.S. Saranya, G. Anushiya Rachel, P. Vijayalakshmi, T. Nagarajan, "Performance Comparison of KLD and PoG Metrics for Finding the Acoustic Similarity Between Phonemes for the Development of a Polyglot Synthesizer", IEEE TENCON, Bangkok, Thailand, 2014, pp. 1-4.
  25. G. Anushiya Rachel, S. Sreenidhi, P. Vijayalakshmi, T. Nagarajan, "Incorporation of Happiness into Neutral Speech by Modifying Emotive-Keywords", IEEE TENCON, Bangkok, Thailand, 2014, pp. 1-6.
  26. Lilly Christina, P. Vijayalakshmi, and T. Nagarajan, "Cross-lingual speaker adaptation in HMM-based speech synthesis" NCC 2014, IIT Kanpur, 2014, pp. 1-5.
  27. V. Sherlin Solomi, S. Lilly Christina, Anushiya Rachel Gladston, Ramani B, P. Vijayalakshmi, T. Nagarajan, "Analysis on Acoustic Similarities between Tamil and English Phonemes using Product of Likelihood-Gaussians for an HMM-Based Mixed-Language Synthesizer", in Proc. of Intl. Oriental COCOSDA 2013 Conference, KIIT, Gurgaon, Nov. 25-27, 2013.
  28. Anushiya Rachel Gladston, S. Lilly Christina, V. Sherlin Solomi, Ramani B, P. Vijayalakshmi, T. Nagarajan, "Development and Analysis of Various Phone-Sized Unit-Based Speech Synthesizers", in Proc. of Intl. Oriental COCOSDA 2013 Conference, KIIT, Gurgaon, Nov. 25-27, 2013.
  29. Ramani. B, Actlin Jeeva M. P., P. Vijayalakshmi, T. Nagarajan, "Voice conversion based multilingual to polyglot speech synthesizer for Indian languages", in Proc. of IEEE TENCON 2013, China, pp. 1-4.
  30. Ramani B, S Lilly Christina, G Anushiya Rachel, Sherlin Solomi V, Mahesh Kumar Nandwana, Anusha Prakash, Aswin Shanmugam, Raghava Krishnan, S Kishore Prahalad (IIITH), K Samudravijaya (TIFR), P Vijayalakshmi, T Nagarajan and Hema Murthy(IITM), "A Common Attribute based Unified HTS framework for Speech Synthesis in Indian Languages", ISCA SSW8, 2013.
  31. M. P. Actlin Jeeva, B. Ramani, P. Vijayalakshmi, "Performance Evaluation and Comparison of Multilingual Speech Synthesizers for Indian Languages" in Intl. Conf. on Recent Trends in Information Technology (ICRTIT'13), 2013, pp. 590-595.
  32. M. Anbu Swarna Priyanka, V. Sherlin Solomi, P. Vijayalakshmi, T. Nagarajan, "Multiresolution Feature Extraction (MRFE) based speech recognition system" in Intl. Conf. on Recent Trends in Information Technology (ICRTIT'13), 2013, pp.152-156.
  33. S. Magdalene Mahiba, S. Lilly Christina, P. Vijayalakshmi, T. Nagarajan, "Analysis of Cross-gender Adaptation using MAP and MLLR in Speech Recognition Systems", in Proc. of Intl. Conf. on Recent Trends in Information Technology (ICRTIT'13), 2013, pp. 387-392
  34. G. Anushiya Rachel, S. Johanan Joy Singh, P. Vijayalakshmi, "LabView and digital signal processor implementation of channel vocoder based model of a cochlear implant", in Intl. Conf. on Recent Trends in Information Technology (ICRTIT'13), 2013, pp. 142-146.
  35. Ramani B, Sherlin S, Anushiya Rachel Gladston, Lilly Christina S, P Vijayalakshmi, Nagarajan Thangavelu, Hema A Murthy (IITM), "Development and evaluation of unit selection and HMM-based speech synthesis for Tamil" - in Proceedings of NCC, 2013, pp. 1-5.
  36. M. Saranya, P. Vijayalakshmi and Nagarajan Thangavelu, "Improving the Intelligibility of Dysarthric Speech by Modifying System Parameters, Retaining Speaker's Identity", Proc. of the Second International Conference on Recent Trends in Information Technology, Apr. 19 - 21st, 2012, pp. 60-65.
  37. M. Gandhimathy, and P. Vijayalakshmi, "Linear Prediction Residual Error Based Assessment of Dysarthric Speech", Proc. of International Conference on Computing and Control Engineering, Apr. 12 - 13th, 2012, ISBN-978-1-4675-2248-9
  38. S. Lilly Christina, P. Vijayalakshmi, and Nagarajan Thangavelu, "HMM Based Speech Recognition System for the Dysarthric Speech Evaluation of Articulatory Subsystem", Proc. of the Second International Conference on Recent Trends in Information Technology, Apr. 19 - 21st, 2012, pp. 54-59.
  39. Anushiya Rachel Gladston, P. Vijayalakshmi, and Nagarajan Thangavelu, "Improving Speech Intelligibility in Cochlear Implants Using Vocoder-Centric Acoustic Models", Proc. of the Second International Conference on Recent Trends in Information Technology, Apr. 19 - 21st, 2012, pp. 66 - 71.
  40. Preethi Mahadevan, Nagarajan T, Pavithra B, S Shri Ranjani, Vijayalakshmi P., "Design of a lab model of a Digital Speech Processor for cochlear implant", TENCON 2011, Indonesia, pp. 307 - 311.
  41. Preethi M, Pavithra B, Shriranjani S, Vijayalakshmi P., and Nagarajan T, "Cochlear Implant model using Mel-frequency cepstral coefficients", Intl. conference on Implantable Auditory Prosthesis (CIAP) - 2011, pp. 183.
  42. P. Vijayalakshmi, Anu Abraham, B. Bharathi and T. Nagarajan - "Reducing the complexity of a triphone-based speech recognition system based on degree of coarticulation" - IEEE TechSym 2011, pp. 175-179.
  43. B. Bharathi, P. Vijayalakshmi and T. Nagarajan - "Speaker identification using utterances correspond to speaker-specific-text" - IEEE TechSym 2011, pp. 171-174.
  44. Anu Abraham, P. Vijayalakshmi and T. Nagarajan - "Pole-focused linear prediction-based spectrogram for coarticulation analysis" - IEEE TechSym 2010, pp. 94-97.
  45. V. Surabhi, P. Vijayalakshmi, T. Steffina Lily and Ra. V. Jayanthan - "Assessment of laryngeal dysfunctions of dysarthric speakers" - IEEE EMBC - 2009, Minnesota, Sep. 2009, pp. 2908 - 2911.
  46. Ra. V. Jayanthan, P. Vijayalakshmi and P. Mukesh Kumar - "Auditory model based acoustic CI simulations for patients with profound hearing loss" - International Conference on Implantable Auditory Prosthesis (CIAP) - 2009, pp. 228.
  47. P. Vijayalakshmi, P. Mukesh Kumar, Ra. V. Jayanthan and T. Nagarajan - "Cochlear implant models based on critical band filters" - IEEE TENCON 2009, Singapore, Nov. 23-26, 2009, pp. 1-5.
  48. P. Vijayalakshmi, T. Nagarajan and Ra. V. Jayanthan - "Selective pole modification-based technique for the analysis and detection of hypernasality" - IEEE TENCON 2009, Nov. 23-26, 2009, pp. 1-5.
  49. N. Sripriya, P. Vijayalakshmi, C. Arun Kumar and T. Nagarajan - "Estimation of instants of significant excitation from speech signal using temporal phase periodicity" - IEEE TENCON 2009, Nov. 23-26, 2009, pp. 1-4.
  50. T. Nagarajan and P. Vijayalakshmi - "Discriminative optimization of HMM Topology using Product of Gaussians" - in Proceedings of NCC, IIT Bombay, Mumbai, Jan. 2008, pp. 182-185.
  51. T. Nagarajan, P. Vijayalakshmi and Douglas O'Shaughnessy - "Combining multiple-sized sub-word units in a speech recognition system using baseform selection" - in Proceedings of Int. Conf. on Spoken Language Processing (ICSLP), Pittsburgh, Sep. 2006, pp. 1595 - 1597.
  52. P. Vijayalakshmi, M. R. Reddy and Douglas O'Shaughnessy - "Assessment of articulatory sub-systems of dysarthric speech using an isolated-style speech recognition system" - in Proceedings of Int. Conf. on Spoken Language Processing (ICSLP), Pittsburgh, Sep. 2006, pp. 981 - 984.
  53. P. Vijayalakshmi and M. R. Reddy - "Assessment of dysarthric speech and analysis on velopharyngeal incompetence" - in Proceedings of IEEE EMBC, New York, Sep. 2006, pp. 3762-3765.
  54. P. Vijayalakshmi and Douglas O'Shaughnessy - "Assessment of dysarthric speech using phoneme recognition system", in IFMBE Proceedings, ICBME, Singapore, Vol.12, Dec. 2005.
  55. P. Vijayalakshmi and M. R. Reddy - "Detection of hypernasality using statistical pattern classifiers" - in INTERSPEECH (Eurospeech), Lisbon, Portugal, Sep. 2005, pp. 701-704.
  56. P. Vijayalakshmi and M. R. Reddy - "The analysis of band-limited hypernasal speech using group delay based formant extraction technique" - in INTERSPEECH (Eurospeech), Lisbon, Portugal, Sep. 2005, pp. 665-668.
  57. P. Vijayalakshmi and M.R. Reddy - "Analysis of degree of dysarthria using automatic speech recognition system", in IASTED, Innsbruck, Austria, Feb. 2005, pp. 723 - 726.
  58. P. Vijayalakshmi and M. R. Reddy - "Analysis of hypernasality by synthesis", in Proceedings of Int. Conf. on Spoken Language Processing (ICSLP), Jeju, South Korea, Oct. 2004, pp. 525 - 528.

Funded Research Projects:

  1. The project titled "Speech-Input Speech-Output Communication Aid (SISOCA) for Speakers with Cerebral Palsy", funded by DST-TIDE, is to be carried out over 2 years, from May 2017 to April 2019, with a sanctioned grant of Rs. 13.72 lakh. The project aims at developing a communication aid for dysarthric speakers. Dysarthria, a neurological speech disorder that can result from cerebral palsy, impairs a patient's ability to communicate with the outside world. This impairment may isolate them from society irrespective of their potential in education and employment, so an augmentative and alternative communication (AAC) device that is portable and less tiring to use is urgently required to support them. The project therefore develops a speech-input speech-output communication aid (SISOCA), as an application on an Android-based handheld device, for dysarthric speakers: SISOCA takes dysarthric speech as input and produces error-corrected synthesized speech in the dysarthric speaker's own voice as output. The aim is to make the device accessible to the general public in the Indian context, especially in Tamil.
    Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI)

  2. The project titled "HMM-based Text-to-Speech Synthesis System for Malaysian Tamil", funded by Murasu Systems Sdn Bhd, Malaysia, is to be carried out over 9 months, from Nov. 2016 to Jul. 2017, with funding of Rs. 4 lakh. The project aims at developing a small-footprint text-to-speech synthesis system for Malaysian Tamil. In this regard, a hidden Markov model-based synthesizer capable of producing highly intelligible speech has been developed, with Tamil data recorded from a native Malaysian speaker. The system will finally be ported to iPhone and Android devices.
    Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI)

  3. The project titled "Development of Text-to-Speech Synthesis Systems for Indian Languages - high quality TTS and small footprint TTS integrated with disability aids" is a joint venture taken up by a consortium of 12 organizations, with IIT Madras as the lead. It is funded by the Department of Electronics and Information Technology (DeitY), Ministry of Communication and Information Technology (MCIT), Government of India, with a total outlay of Rs. 12.66 crores, of which SSNCE has received Rs. 77 lakh. The project primarily aims at developing small-footprint text-to-speech (TTS) systems for 13 languages, namely, Hindi, Tamil, Malayalam, Telugu, Marathi, Odia, Manipuri, Assamese, Bengali, Kannada, Gujarati, Rajasthani, and Bodo. Other goals of the project include incorporating intonation and duration models to improve the quality of synthesis, developing an emotional speech synthesizer, and integrating TTS systems with OCR for reading stories online and with aids for people with disabilities. Specifically, SSNCE has been assigned the task of developing small-footprint Tamil and bilingual (Tamil and Indian English) TTS systems. To date, the team has developed monolingual and bilingual unit-selection and HMM-based speech synthesis systems. Further, polyglot HMM-based synthesizers capable of synthesizing Tamil, Hindi, Malayalam, Telugu, and English speech have been developed using voice conversion and speaker adaptation techniques.
    Project Investigators at SSNCE: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (co-PI), Dr. A. Shahina (co-PI).

  4. The project titled "Speech enabled interactive enquiry system in Tamil", funded by the Tamil Virtual Academy, a Government of Tamil Nadu organization, is to be carried out over 6 months starting from March 2016, with a sanctioned grant of Rs. 9.52 lakh. A speech-enabled enquiry system in Tamil is proposed for use in tourism and agriculture. It consists primarily of a speech recognition system (which yields the text corresponding to the given speech input), a database, and a text-to-speech synthesis system. Initially, the system prompts the user to pose a question. The user may request information regarding tourist places (such as general information about the place, or distance/directions from a place of origin to the tourist spot) or regarding agriculture (such as weather conditions or the market price of a crop). The question from the user (in the form of speech) is given to the speech recognition system, which generates the corresponding text. Once the text is obtained, the text-to-speech synthesis system synthesizes the corresponding utterance and plays it back to the user for confirmation. On confirmation, the information requested by the user is fetched from a database containing details on tourist places and agriculture, converted to speech by the text-to-speech synthesis system, and played to the user. (A minimal illustrative sketch of this flow is given after this project list.)
    Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI), Dr. B. Bharathi (co-PI), Ms. B. Sasirekha (co-PI).

  5. We carried out a project titled "Assessment and intelligibility modification of dysarthric speakers", funded by the All India Council for Technical Education (AICTE). This three-year project (Dec. 2010 - Dec. 2013), with Rs. 9 lakh in funding, aimed at developing a detection and assessment system that analyzes problems related to the laryngeal, velopharyngeal, and articulatory subsystems of dysarthric speakers using a speech recognition system and relevant signal processing-based techniques. Using the evidence derived from the assessment system, dysarthric speech is corrected and resynthesized while conserving the speaker's identity, thereby improving intelligibility. The acoustic analysis is validated using instruments such as a nasometer and an electroglottograph. The complete system, which detects the multisystem dysregulation caused by dysarthria and then performs correction and resynthesis, is expected to improve the quality of life of dysarthric speakers, as they will be able to communicate easily with society without human assistance.
    Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI)

  6. We have completed two research projects funded by the SSN Trust, worth Rs. 2 lakh in total, titled "Design of a lab model of an improved speech processor for cochlear-implants" and "Anatomical vibration sensor speech corpus for speech applications in noisy environments", during the period Jun. 2010 - Jun. 2012. The objective of the first project was to design a vocoder-based lab model of a cochlear implant speech processor, so that the effect of system-specific parameters, such as filter order and bandwidth, on speech intelligibility could be analysed (an illustrative channel-vocoder sketch is given after this project list). The objective of the second project was to build a corpus of throat-microphone speech and to develop a speaker identification system using that corpus.
    Project Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan and Dr. A. Shahina.
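
The enquiry-system flow described in project 4 (speech in, recognition, play-back for confirmation, database lookup, speech out) can be summarised in a few lines of Python. The sketch below is purely illustrative: recognize(), synthesize_and_play(), and the FAQ dictionary are stand-in stubs introduced here for illustration, not the actual Tamil ASR, TTS, or database components developed in the project.

    # Illustrative sketch of one turn of the speech-enabled enquiry system.
    # recognize() and synthesize_and_play() are hypothetical stubs, not the
    # project's actual Tamil ASR and TTS modules.
    FAQ = {"paddy price": "Paddy price today: Rs. 2,200 per quintal."}  # toy database

    def recognize(audio):
        # Stub ASR: would convert the user's spoken question to text.
        return "paddy price"

    def synthesize_and_play(text):
        # Stub TTS: would synthesize the text and play it back to the user.
        print("TTS:", text)

    def enquiry_turn(audio, user_confirms=True):
        """One turn: ASR -> play back for confirmation -> database lookup -> TTS."""
        query = recognize(audio)
        synthesize_and_play(query)            # play the recognised query back
        if not user_confirms:                 # user rejects the recognised query
            return None
        answer = FAQ.get(query, "No information available.")
        synthesize_and_play(answer)           # play the fetched answer
        return answer

    enquiry_turn(audio=None)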
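
Project 6 describes a vocoder-based lab model of a cochlear implant speech processor used to study how parameters such as the number of channels, filter order, and bandwidth affect intelligibility. The following is a minimal sketch of a standard noise-excited channel vocoder of that kind, assuming log-spaced bands and typical parameter values; it is not the project's actual implementation.

    # Minimal noise-excited channel vocoder (a common lab model of a cochlear
    # implant speech processor). Channel count, filter order, and band edges
    # below are illustrative assumptions only.
    # Example usage: vocoded = vocode(signal, fs=16000, n_channels=8)
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def vocode(speech, fs, n_channels=8, filter_order=4, f_lo=100.0, f_hi=7000.0):
        """Return noise-vocoded speech simulating an n_channels-electrode implant."""
        speech = np.asarray(speech, dtype=float)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
        carrier = np.random.randn(len(speech))              # broadband noise carrier
        env_sos = butter(2, 160.0, btype='lowpass', fs=fs, output='sos')  # envelope smoother
        out = np.zeros(len(speech))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(filter_order, [lo, hi], btype='bandpass', fs=fs, output='sos')
            band = sosfiltfilt(sos, speech)                 # analysis band
            env = sosfiltfilt(env_sos, np.abs(band))        # rectified, smoothed envelope
            out += env * sosfiltfilt(sos, carrier)          # modulate band-limited noise
        return out / (np.max(np.abs(out)) + 1e-12)          # peak-normalise

Sweeping n_channels or filter_order in such a model is one way to study how spectral resolution affects the intelligibility of the vocoded speech, which is the kind of analysis the project description refers to.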

Workshops organized:

  1. Title: Winter School on Speech and Audio Processing (WiSSAP 2016)
    Organizers: Dr. Hema A. Murthy (IITM), Dr. T. Nagarajan, Dr. P. Vijayalakshmi, Dr. A. Shahina
    Venue: SSN College of Engineering
    Date: Jan. 8th - 11th 2016

  2. Title: Two day workshop on Technologies for speaker and language recognition
    Coordinators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan and Ms. B. Ramani
    Venue: SSN College of Engineering
    Date: April 29th - 30th 2015

  3. Title: Workshop on HMM-based speech synthesis
    Coordinators: Dr. T. Nagarajan and Dr. P. Vijayalakshmi
    Venue: SSN College of Engineering
    Date: Nov. 26th - 30th 2012
    Participants: TTS consortium members.

  4. Title: Workshop on automatic speech recognition
    Coordinators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi and Dr. A. Shahina
    Venue: SSN College of Engineering, Chennai.
    Date: 26th to 29th Dec. 2010

  5. Title: Workshop on Speech Processing and its Applications
    Coordinators: Dr. T. Nagarajan and Dr. P. Vijayalakshmi
    Venue: SSN College of Engineering, Chennai.
    Date: 21st and 22nd Feb. 2008

Students Associated

Ph.D. Scholars
As Supervisor
  1. B. Ramani (June 2010), "Multilingual to polyglot speech synthesis system for Indian languages by sharing common attributes", Part time - Completed.
  2. M. Dhanalakshmi (June 2013), "An Assessment and Intelligibility modification system for Dysarthric speakers", Part time.
  3. M. P. Actlin Jeeva (January 2014), "Dynamic Multi-Band Filter Structures for Simultaneous Improvement of Speech Quality and Intelligibility", Full time.
  4. K. Mrinalini (Jan. 2016), "A hybrid approach for speech to speech translation system", Full time.
  5. T. A. Mariya Celin (Jan. 2016), "Development of an Augmentative and Alternative Communication for Severe Dysarthric Speakers in Indian Languages", Full time.
  6. T. Lavanya (Jan. 2017), "Maintaining Speech Intelligibility in Challenging Conditions", Full time.
As Co-Supervisor
  1. V. Sherlin Solomi (January 2013), "Development of an HMM-based Bilingual Synthesizer for Tamil and Indian English by Merging Acoustically Similar Phonemes", Full time.
  2. G. Anushiya Rachel (January 2013), "Estimation of Glottal Closure Instants and its Application", Full time.

Guided around 25 M.E. theses and 20 undergraduate projects.
