Software for semantic and sentiment analysis of large volumes of open-ended texts.
The semantic and sentiment analysis algorithms examine the words of the original texts to discover which themes run through them, how those themes are connected to each other, and how they change over time.
These algorithms overcome the bias of manual data classification by a team of assessors and significantly reduce the overall effort required.
The software streamlines manual analysis work by providing an objective summary, so that evaluators can concentrate directly on interpreting the results.
How to use SemantiCase:
- to capture the topics that are genuinely relevant to people, identifying new or latent topics from their first appearance;
- to measure the gradation of opinions (positive or negative), stratified by category of people, by individual, and by topic, using a measurement calibrated on internal communication;
- to identify the "sensitive" issues that most influence opinion;
- to identify the typical communicative traits of each category of people, in order to harmonize communication with the specific language of each group;
- to monitor over time which themes most influence people's judgment.
Examples of application of SemantiCase:
- QUESTIONNAIRES AND SURVEY ANALYSIS
- ANALYSIS OF TESTS AND EXERCISES
- KNOWLEDGE-BASE DOCUMENT SEARCH ENGINE
- CUSTOMER CARE DATA ANALYSIS (COMPLAINTS)
- ANALYSIS OF JUDGMENTS IN ONLINE COMMUNITIES
MEASURING EMPLOYEE SENTIMENT IN ENEL'S CLIMATE SURVEY
Analyzing open comments to understand the nuances of employee judgments in the climate survey was the goal of Enel's collaboration with Piazza Copernico on this research project.
In very large organizations, the volume of comments collected demands considerable reading time and effort, as well as strong agreement among evaluators on the analysis criteria. For this reason, Enel decided to experiment with Piazza Copernico on applying semantic and sentiment analysis techniques to understand, measure and compare opinions in a statistically valid way while reducing the overall effort.
To achieve the goal, the following were applied:
- the semantic algorithm based on the Structural Topic Model;
- the sentiment analysis algorithm.
The first made it possible to identify the most representative content structure on a probabilistic basis. A reasoned reading of the structure of the extracted topics then made it possible to identify the "hot" issues and understand all their associated meanings.
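The Structural Topic Model itself is typically fitted with dedicated statistical packages; as a minimal pure-Python illustration of the underlying idea — each comment assigned to a topic, each topic summarized by its most frequent words — the sketch below matches comments against hypothetical seed topics. All names and data are invented for illustration and are not part of the SemantiCase implementation.

```python
from collections import Counter

# Hypothetical mini-corpus of survey comments (illustrative only).
comments = [
    "training courses are useful but too short",
    "more training and career growth opportunities needed",
    "communication between teams should improve",
    "internal communication is slow and unclear",
]

STOPWORDS = {"are", "but", "too", "and", "is", "should", "between"}

def tokens(text):
    """Lowercase word tokens with stopwords removed."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

# Illustrative topic "structure": for each seed topic, its associated words.
seed_topics = {
    "training": {"training", "courses", "career", "growth"},
    "communication": {"communication", "teams", "internal"},
}

def assign_topic(comment):
    """Assign a comment to the seed topic with the largest word overlap."""
    scores = {t: len(seed_topics[t] & set(tokens(comment))) for t in seed_topics}
    return max(scores, key=scores.get)

assignments = {c: assign_topic(c) for c in comments}

# The most frequent terms per topic approximate that topic's word profile.
for topic in seed_topics:
    words = Counter(w for c, t in assignments.items() if t == topic
                    for w in tokens(c))
    print(topic, words.most_common(3))
```

A real topic model infers the topics and their word distributions from the data rather than taking them as given; this sketch only shows the shape of the output an analyst then reads to find the "hot" issues.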
Subsequently, sentiment analysis was conducted on each comment, verifying the polarity (positive or negative) of the judgments expressed in the text. This analysis made it possible to understand the underlying evaluation with which the texts had been written. In addition, separate sentiment indices were computed by role, age, gender, seniority and team leadership, making it possible to evaluate the judgments of each category.
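The stratified indices described above can be pictured as average polarity per demographic group. The sketch below uses a toy lexicon-based scorer purely for illustration — the actual algorithm, lexicon and categories used in the Enel project are not specified in this document, and all data here is hypothetical.

```python
from statistics import mean

# Hypothetical polarity lexicon (real systems use trained models or
# much larger calibrated lexicons).
POSITIVE = {"good", "useful", "excellent", "clear"}
NEGATIVE = {"slow", "unclear", "poor", "short"}

def polarity(text):
    """Score a comment in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

# Comments tagged with a demographic attribute (invented data).
records = [
    {"role": "manager", "text": "communication is clear and useful"},
    {"role": "manager", "text": "courses are too short"},
    {"role": "staff", "text": "feedback is slow and unclear"},
]

def sentiment_index(records, key):
    """Average polarity per value of a demographic attribute,
    e.g. per role, age band, gender, seniority or team leadership."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(polarity(r["text"]))
    return {k: mean(v) for k, v in groups.items()}

print(sentiment_index(records, "role"))
```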
In conclusion, this analysis made it possible to read the open content of the climate survey through a meaningful synthesis, and to better understand the associated judgments and the communicative forms in which they are expressed.
For PIAZZA COPERNICO LAB, this project was a significant application of semantic algorithms to open-ended questionnaires (training and otherwise), surveys, community analyses, contests, and any other area where content and opinions are expressed in writing.
DLA - DATA LEARNING ANALYTICS
BIG DATA AND E-LEARNING STATISTICAL INDICES
A dashboard for analyzing course progress, with the ability to compare different editions of a course or different nodes of the organization chart, to detect recurring patterns, and to classify behavior on courses.
The Learning Analytics tool can be used both stand-alone and integrated into the corporate platform, and can process data from third-party sources such as external LMS platforms.
The tool starts from the assumption that a participant's training experience is not limited to launching and studying individual teaching materials: it is a more complex process in which the platform — as an environment for relating to other participants, to a tutor supporting the study, or to scheduled messaging — is a system to be understood in depth. Beyond these aspects, other "gray areas" include the impact of personal variables, the characteristics of the course, and the time available, relative to the effectiveness of online training.
The tool implements an analysis system that makes it possible to understand:
- the effective use of a course, to identify not only study styles and preferences, but also the critical areas of the course itself (didactic meta-evaluation);
- user behavior in relation to a specific course to evaluate the effectiveness of design and organizational choices;
- the most effective teaching solutions, attention across the different types of course, the use of teaching time, and the impact of variables external to the courses.
This analysis of the data makes it possible to correlate the different variables present in the LMS, moving beyond the logic of vertical report consultation in favor of a three-dimensional exploration of implicit phenomena, in order to fully understand the value of the teaching experience.
It is possible to integrate the performance indices of course participants (based on 7 macro-variables) with a predictive system for results that supports the management of the course delivery phase (training forecasting), and with any adaptivity rules.
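The document does not name the 7 macro-variables or the forecasting method, so the sketch below is only a hypothetical shape for such an index: seven invented, normalized activity variables combined into a weighted score, with a naive threshold forecast standing in for the real predictive system.

```python
# Hypothetical macro-variables and weights (invented for illustration;
# the real DLA indices are not specified in this document).
WEIGHTS = {
    "logins": 0.15, "time_on_course": 0.20, "materials_opened": 0.15,
    "quiz_score": 0.20, "forum_posts": 0.10, "tutor_contacts": 0.10,
    "messages_read": 0.10,
}

def performance_index(participant):
    """Weighted combination of the 7 macro-variables, each normalized to [0, 1]."""
    return sum(WEIGHTS[k] * participant[k] for k in WEIGHTS)

def forecast_completion(participant, threshold=0.5):
    """Naive stand-in for training forecasting: predict completion
    when the index clears a threshold."""
    return performance_index(participant) >= threshold

# One illustrative participant record.
p = {"logins": 0.8, "time_on_course": 0.7, "materials_opened": 0.9,
     "quiz_score": 0.6, "forum_posts": 0.2, "tutor_contacts": 0.1,
     "messages_read": 0.5}
print(performance_index(p), forecast_completion(p))
```

A production forecaster would be trained on historical course data; the point here is only how an index plus a prediction can feed the delivery phase and adaptivity rules.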
ARTIFICIAL TUTORING SYSTEM CHATTERBOT
Virtual assistant system (automatic teaching tutor), configurable according to the needs of each project.
The virtual tutor, built with machine-learning techniques, is currently being tested; it will be able to help the learner solve problems, also by interfacing with the semantic engine to explore the knowledge base.
The virtual tutor will dialogue with the participant, both on how to use the teaching system and on specific content.
When fully operational, it will provide an interface between the didactic system and the participant, guaranteeing a continuous 24/7 learning-support service.
DYNAMIC ASSESSMENT
Project aimed at building a Dynamic Assessment system for personalizing evaluation tests.
The project is an adaptive testing system designed to administer complex, dynamic evaluation tests that adapt the sequence of questions to the results achieved as the test progresses.
During administration of the questionnaire, it quickly consolidates results in the areas of greatest competence, and investigates in greater depth the areas of weaker preparation, in order to effectively discriminate the knowledge levels of medium and low performers.
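The routing idea above — consolidate an area quickly while answers are correct, probe deeper once errors appear — can be sketched as follows. The question bank, areas and selection rule are all invented for illustration; the real system's adaptivity logic is not described in this document.

```python
# Hypothetical question bank: questions grouped by area and difficulty.
QUESTION_BANK = {
    ("math", "easy"): ["m-e1", "m-e2"],
    ("math", "hard"): ["m-h1", "m-h2", "m-h3"],
}

def next_questions(area, history, bank=QUESTION_BANK):
    """Pick the next question pool for an area from the answer history
    (a list of (question_id, answered_correctly) pairs): stay shallow
    while answers are correct, route into deeper probe questions once
    an error appears."""
    wrong = [q for q, ok in history if not ok]
    level = "hard" if wrong else "easy"
    asked = {q for q, _ in history}
    return [q for q in bank[(area, level)] if q not in asked]

# Two correct answers: the easy pool is exhausted, the area is consolidated.
print(next_questions("math", [("m-e1", True), ("m-e2", True)]))
# One wrong answer: routing switches to the deeper probe questions.
print(next_questions("math", [("m-e1", False)]))
```

Real adaptive tests usually select items by an estimated ability level (e.g. item response theory) rather than a binary easy/hard switch; the sketch only shows the branching behavior the text describes.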
It provides a back-office system for comparing respondents' results.
Thanks to a partnership agreement with a leading adaptive-media company, our LMS platform Labe-l Academy has been integrated with MorphCast, an editor that combines interactive video, AI, machine learning and facial-expression recognition, as well as attention monitoring, without using personal data and therefore in full compliance with privacy regulations.
The main applications in E-Learning:
- support for authentication processes in the LMS: the learner can skip entering credentials and log on to the platform through facial recognition;
- introduction of presence and attention monitoring into WBT courses.