<doi_batch xmlns="http://www.crossref.org/schema/4.4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="4.4.0"><head><doi_batch_id>fb536c28-c1c7-49d6-b6cb-b4a09115985f</doi_batch_id><timestamp>20210310050455824</timestamp><depositor><depositor_name>naun</depositor_name><email_address>mdt@crossref.org</email_address></depositor><registrant>MDT Deposit</registrant></head><body><journal><journal_metadata language="en"><full_title>International Journal of Education and Information Technologies</full_title><issn media_type="electronic">2074-1316</issn><archive_locations><archive name="Portico"/></archive_locations><doi_data><doi>10.46300/9109</doi><resource>http://www.naun.org/cms.action?id=3037</resource></doi_data></journal_metadata><journal_issue><publication_date media_type="online"><month>4</month><day>30</day><year>2020</year></publication_date><publication_date media_type="print"><month>4</month><day>30</day><year>2020</year></publication_date><journal_volume><volume>14</volume><doi_data><doi>10.46300/9109.2020.14</doi><resource>http://www.naun.org/cms.action?id=23206</resource></doi_data></journal_volume></journal_issue><journal_article language="en"><titles><title>An Interactive Adaptable Learning Interface for E-Learning Sessions</title></titles><contributors><person_name sequence="first" contributor_role="author"><given_name>Elena</given_name><surname>Odarushchenko</surname><affiliation>Poltava State Agrarian Academy, Poltava, UKRAINE</affiliation></person_name><person_name sequence="additional" contributor_role="author"><given_name>Valentina</given_name><surname>Butenko</surname><affiliation>National Aerospace University “KhAI”, Computer Systems and Networks Department, Kharkiv, UKRAINE</affiliation></person_name><person_name sequence="additional" contributor_role="author"><given_name>Viktor</given_name><surname>Smolyar</surname><affiliation>Poltava State Agrarian Academy, Poltava, UKRAINE</affiliation></person_name><person_name sequence="additional" contributor_role="author"><given_name>Vyacheslav</given_name><surname>Kharchenko</surname><affiliation>National Aerospace University “KhAI”, Computer Systems and Networks Department, Kharkiv, UKRAINE</affiliation></person_name></contributors><jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1"><jats:p>The introduction of augmented reality into E-learning systems opens a new era in personalized remote education. While E-learning offers many benefits to both instructor and student, it also presents various difficulties, such as the absence of vital personal interaction, which can affect student motivation and inspiration. To narrow this gap we have started the development of an Interactive Adaptable Learning Interface (IALI) that, with the help of speech synthesizers and emotion and voice recognition systems, adapts the ongoing E-learning session to the student's behavior, simulating the way an instructor adapts a lesson to the student's reactions. As IALI is initially intended for Ukrainian-speaking students, we have carried out an in-depth analysis of currently available solutions for Ukrainian speech synthesis and recognition.
In this paper we present this analysis and describe how the most widely used speech synthesizers and voice and emotion recognition systems were applied to develop the first version of IALI.</jats:p></jats:abstract><publication_date media_type="online"><month>11</month><day>20</day><year>2020</year></publication_date><publication_date media_type="print"><month>11</month><day>20</day><year>2020</year></publication_date><pages><first_page>115</first_page><last_page>120</last_page></pages><ai:program xmlns:ai="http://www.crossref.org/AccessIndicators.xsd" name="AccessIndicators"><ai:free_to_read start_date="2020-11-20"/><ai:license_ref applies_to="am" start_date="2020-11-20">https://www.naun.org/main/NAUN/educationinformation/2020/a282008-014(2020).pdf</ai:license_ref></ai:program><archive_locations><archive name="Portico"/></archive_locations><doi_data><doi>10.46300/9109.2020.14.14</doi><resource>http://www.naun.org/main/NAUN/educationinformation/2020/a282008-014(2020).pdf</resource></doi_data><citation_list><citation key="ref0"><unstructured_citation>Arkorful, V., Abaidoo, N.: The role of e-learning, the advantages and disadvantages of its adoption in higher education. International Journal of Education and Research 2(12), 397-410 (2014).</unstructured_citation></citation><citation key="ref1"><unstructured_citation>Holmes, B., Gardner, J.: E-Learning: concepts and practice. 1st edn. SAGE Publications Ltd., London (2006).</unstructured_citation></citation><citation key="ref2"><doi>10.1007/s00779-013-0747-y</doi><unstructured_citation>Radu, I.: Augmented reality in education: a meta-review and cross-media analysis. Personal and Ubiquitous Computing 18(6), 1533-1543 (2014).</unstructured_citation></citation><citation key="ref3"><unstructured_citation>Donally, J.: Learning transported: augmented, virtual and mixed reality for all classrooms. 1st edn. ISTE, London (2018).</unstructured_citation></citation><citation key="ref4"><unstructured_citation>EECS20N: Signals and systems, parametric speech synthesis, https://ptolemy.berkeley.edu/eecs20/speech/voder.html, last accessed 2019/10/12.</unstructured_citation></citation><citation key="ref5"><unstructured_citation>Taylor, P.: Text to speech synthesis. 1st edn. Cambridge University Press, Cambridge (2009).</unstructured_citation></citation><citation key="ref6"><unstructured_citation>Dutoit, T.: An introduction to text-to-speech synthesis. 1st edn. Springer, Dordrecht (1997).</unstructured_citation></citation><citation key="ref7"><unstructured_citation>BIBLPROG Windows, UkrVox page, https://biblprog.org.ua/ua/ukrvox, last accessed 2019/03/20.</unstructured_citation></citation><citation key="ref8"><unstructured_citation>Trosti, Anatol’ homepage, http://www.trosti.com.ua/ua/anatol.html, last accessed 2019/03/20.</unstructured_citation></citation><citation key="ref9"><unstructured_citation>CyberMova homepage, http://cybermova.com, last accessed 2019/03/20.</unstructured_citation></citation><citation key="ref10"><unstructured_citation>pyttsx3 2.7 project, https://pypi.org/project/pyttsx3/2.7, last accessed 2019/04/10.
</unstructured_citation></citation><citation key="ref11"><unstructured_citation>pypiwin32 219 project, https://pypi.org/project/pypiwin32/219, last accessed 2019/04/10.</unstructured_citation></citation><citation key="ref12"><doi>10.1109/icassp.2016.7472618</doi><unstructured_citation>Bahdanau, D., Chorowski, J., Serdyuk, D., Brakel, P., Bengio, Y.: End-to-end attention-based large vocabulary speech recognition. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Proceedings, pp. 1-8, IEEE, Shanghai, China (2016).</unstructured_citation></citation><citation key="ref13"><doi>10.1109/icassp.2018.8461972</doi><unstructured_citation>Toshniwal, S., Weiss, R., Sainath, T., et al.: Multilingual speech recognition with a single end-to-end model. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Proceedings, IEEE, Calgary, Canada (2018).</unstructured_citation></citation><citation key="ref14"><doi>10.1109/jbhi.2017.2688239</doi><unstructured_citation>Katsigiannis, S., Ramzan, N.: DREAMER: a database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE Journal of Biomedical and Health Informatics 22(1), 98-107 (2017).</unstructured_citation></citation><citation key="ref15"><unstructured_citation>Cambria, E., Hussain, A.: Sentic computing: a commonsense-based framework for concept-level sentiment analysis. 1st edn. Springer, Heidelberg (2015).</unstructured_citation></citation><citation key="ref16"><unstructured_citation>CMUSphinx homepage, https://cmusphinx.github.io, last accessed 2019/10/12.</unstructured_citation></citation><citation key="ref17"><unstructured_citation>Vokaturi homepage, https://vokaturi.com, last accessed 2019/10/12.</unstructured_citation></citation></citation_list></journal_article></journal></body></doi_batch>