BLOG

News


Machine Learning for Early Prediction of Cognitive Decline in Alzheimer’s Disease

Background: Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by a long preclinical phase during which cognitive impairment develops gradually. Current therapeutic strategies are only able to slow disease progression, making early identification of subjects at risk a crucial challenge. Mild cognitive impairment (MCI) represents an intermediate stage between normal cognition and dementia and is widely recognized as a key target for early diagnosis and preventive intervention. However, early clinical manifestations are often subtle and insufficient for reliable diagnosis. In this context, Machine Learning (ML) techniques, especially when combined with multimodal biomarkers, offer promising tools to improve early prediction of cognitive decline.

Objectives: The main objective of this study is to develop an interpretable Machine Learning model capable of predicting the transition from cognitively normal (CN) status to mild cognitive impairment (MCI) using baseline multimodal biomarkers. By leveraging data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the study aims to:


  1. identify the most informative biomarkers associated with early cognitive decline;
  2. assess the predictive performance of a Random Forest classifier;
  3. support early risk stratification in individuals without overt clinical symptoms.

Methods: The study is based on data from the ADNI Merge dataset, which includes longitudinal observations from more than 2,400 subjects characterized by demographic variables, cognitive assessments, cerebrospinal fluid biomarkers, PET imaging, and radiomic features. Only subjects classified as CN at baseline and with at least one follow-up visit were considered, focusing on stability versus progression to MCI. A structured preprocessing pipeline was implemented. Age at each visit was reconstructed, categorical variables were encoded, and diagnostic labels were harmonized. Missing values were systematically analyzed to assess data quality, as can be seen in Figure 1.

Figure 1. Distribution of missing values across dataset features, highlighting the need for feature selection and imputation.

Features with more than 33% missing values were removed, while remaining missing data were imputed using a k-Nearest Neighbors approach. Exploratory analysis was performed through correlation matrices to investigate relationships between biomarkers.
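For illustration, the thresholding and k-NN imputation steps can be sketched with pandas and scikit-learn; the column names and missingness pattern below are invented stand-ins, not actual ADNI Merge fields:

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy stand-in for the baseline table (column names are illustrative only)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "AGE": rng.normal(73, 6, 100),
    "RAVLT_immediate": rng.normal(45, 10, 100),
    "ABETA": rng.normal(1000, 300, 100),
    "PET_FEATURE": rng.normal(1.2, 0.2, 100),
})
df.loc[df.sample(frac=0.6, random_state=1).index, "PET_FEATURE"] = np.nan  # 60% missing
df.loc[df.sample(frac=0.1, random_state=2).index, "ABETA"] = np.nan        # 10% missing

# 1) Drop features with more than 33% missing values
keep = df.columns[df.isna().mean() <= 0.33]
df = df[keep]

# 2) Impute the remaining gaps with k-Nearest Neighbors
imputed = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(df),
                       columns=df.columns)
```

KNNImputer fills each gap with the average of that feature over the k most similar subjects, which preserves multivariate structure better than simple column-wise means.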

Figure 2. Correlation matrix including CSF analysis, radiomic features and cognitive tests.

Statistical analysis based on ANOVA was applied to identify variables showing significant differences between CN and MCI groups. An example is shown in Figure 3.

Figure 3. Boxplot of baseline RAVLT_immediate scores for CN and MCI subjects, showing statistically significant group differences.
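With two groups, the one-way ANOVA amounts to comparing the CN and MCI score distributions; a minimal SciPy sketch, using made-up RAVLT_immediate distributions and group sizes, could look like:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Hypothetical baseline scores (means, spreads, and group sizes are invented)
ravlt_cn = rng.normal(45, 9, 120)    # subjects who remain cognitively normal
ravlt_mci = rng.normal(38, 9, 60)    # subjects who progress to MCI

f_stat, p_value = f_oneway(ravlt_cn, ravlt_mci)
significant = p_value < 0.05  # the variable is retained if the groups differ
```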

Finally, an interpretable Random Forest classifier was trained on the baseline features, with hyperparameters optimized to balance sensitivity and specificity.
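The paper does not spell out the chosen hyperparameters, so the values below are placeholders; this is only a sketch of such a pipeline with scikit-learn, reporting sensitivity and specificity from the confusion matrix:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the baseline feature matrix (class 1 = progression to MCI)
X, y = make_classification(n_samples=600, n_features=12, weights=[0.7, 0.3],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" is one common knob for trading sensitivity vs. specificity
clf = RandomForestClassifier(n_estimators=300, max_depth=5,
                             class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Feature importances provide the model's interpretability
ranking = np.argsort(clf.feature_importances_)[::-1]
```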

Results: The proposed Random Forest model achieved an overall classification accuracy of 76%, with a sensitivity of 64% and a specificity of 84% in predicting progression from CN to MCI. Exploratory and statistical analyses highlighted that cognitive test scores (e.g., RAVLT, ADAS, mPACC) provided the strongest discriminative power, supported by neurostructural radiomic features such as ventricular and hippocampal volumes.

Conclusions: This study demonstrates the effectiveness of an interpretable Machine Learning approach for the early prediction of cognitive decline in Alzheimer’s disease. By integrating multimodal baseline biomarkers, the proposed Random Forest model provides reliable performance in identifying individuals at risk of progression to MCI. The results support the use of data-driven methods to enhance early diagnosis and risk stratification, contributing to the development of personalized and preventive strategies in the management of neurodegenerative diseases.

References:

[1] L. De Palma, A. Di Nisio, A. M. Lucia Lanzolla, P. Matarrese, E. M. Pich and F. Attivissimo, “Machine Learning for Early Prediction of Cognitive Decline in Alzheimer’s Disease,” 2025 IEEE Medical Measurements & Applications (MeMeA), Chania, Greece, 2025, pp. 1-6, doi: 10.1109/MeMeA65319.2025.11068006.

Uncategorized


Enhancing ABP Estimation through Comprehensive PPG Signal Analysis and Advanced Loss Function Optimization

Background: Blood pressure (BP) is a fundamental physiological parameter for the diagnosis and management of cardiovascular diseases, particularly hypertension. Conventional cuff-based measurement techniques provide only intermittent values and are not suitable for continuous monitoring. Photoplethysmography (PPG) has emerged as a promising non-invasive technique for continuous BP monitoring, especially in wearable devices and telemedicine applications. However, accurate arterial blood pressure (ABP) estimation from PPG signals remains challenging due to noise, motion artifacts, and strong inter-subject variability. Deep Learning (DL) models offer powerful tools to capture the complex nonlinear relationship between PPG and ABP signals, but their performance is strongly influenced by the choice of the loss function.

Objectives: The main objective of this study is to enhance the accuracy and clinical relevance of ABP estimation from PPG signals by introducing a dedicated loss function that incorporates physiological knowledge. Specifically, the study aims to:

  1. develop DL-based regression models for ABP estimation using raw PPG signals;
  2. design a novel loss function that emphasizes systolic and diastolic peaks of the ABP waveform;
  3. evaluate the proposed approach in an inter-subject framework to assess its robustness and generalization capability.

Methods: The proposed methodology is based on data extracted from the MIMIC-III waveform database, which contains synchronized PPG and ABP signals acquired from intensive care unit patients. After subject selection, signals were preprocessed through temporal alignment, noise reduction, normalization, and quality assessment. The overall preprocessing and dataset construction workflow is summarized in Figure 1.

Figure 1. Workflow of the processing for MIMIC-III dataset.

Several DL architectures were implemented, including Residual U-Net and Long Short-Term Memory (LSTM) networks adapted for one-dimensional signal regression. Models were trained using the Adam optimizer and evaluated using mean absolute error and root mean squared error metrics. To improve prediction accuracy at clinically relevant points, the proposed Peak Enhancing Loss Function (PELF) was introduced. The loss function assigns higher weights to systolic and diastolic points of the ABP waveform, as illustrated by the weighting profile in Fig. 2, thus guiding the model toward more accurate peak reconstruction.

Figure 2. Weight of error for loss function computation, compared with the ground truth ABP signal.
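The exact PELF weighting profile is defined in the paper; the NumPy sketch below only illustrates the principle, assuming a Gaussian bump of extra weight around each detected extremum of the ground-truth waveform:

```python
import numpy as np

def pelf(y_true, y_pred, peak_weight=4.0, width=5.0):
    """Peak Enhancing Loss: MSE with extra weight near systolic peaks and
    diastolic troughs of the ground-truth ABP waveform. The Gaussian
    weighting profile and its parameters are illustrative assumptions."""
    n = len(y_true)
    w = np.ones(n)
    i = np.arange(1, n - 1)
    # crude strict local-extrema detection on the ground truth
    peaks = i[(y_true[i] > y_true[i - 1]) & (y_true[i] > y_true[i + 1])]
    troughs = i[(y_true[i] < y_true[i - 1]) & (y_true[i] < y_true[i + 1])]
    idx = np.arange(n)
    for c in np.concatenate([peaks, troughs]):
        w += peak_weight * np.exp(-0.5 * ((idx - c) / width) ** 2)
    return float(np.mean(w * (y_true - y_pred) ** 2))

# Toy ABP-like wave: an error at a systolic peak costs more than elsewhere
t = np.linspace(0.0, 2.0, 250)
abp = 90.0 + 30.0 * np.maximum(np.sin(2 * np.pi * t), 0.0) ** 2
err_at_peak = abp.copy()
err_at_peak[int(np.argmax(abp))] -= 5.0   # 5 mmHg error at the systolic peak
err_at_base = abp.copy()
err_at_base[0] -= 5.0                      # same error away from the peaks
print(pelf(abp, err_at_peak) > pelf(abp, err_at_base))  # True
```

A conventional MSE would score both errors identically; the weighting is what steers the network toward accurate peak reconstruction.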

Results: The performance of the proposed Peak Enhancing Loss Function (PELF) was quantitatively evaluated and compared with the conventional Mean Squared Error (MSE) loss using a Residual U-Net architecture. The comparison was carried out on the test set considering two different signal durations, 8.192 s (Dataset A3) and 30 s (Dataset A4), using PPG signals and their derivatives.

Table 1. Comparison of results between the use of MSE and PELF loss functions.

The results reported in Table 1 show that the adoption of PELF leads to an overall improvement in BP estimation accuracy with respect to MSE. In particular, for the 30 s signal configuration (Dataset A4), PELF achieves the best performance, with a reduction of both RMSE and MAE for systolic and diastolic pressure estimation. Specifically, the RMSE for systolic pressure decreases from 17.75 mmHg (MSE) to 17.38 mmHg (PELF), while the MAE for systolic pressure is reduced from 14.13 mmHg to 13.67 mmHg. A similar improvement is observed for diastolic pressure, where the MAE decreases from 6.01 mmHg to 5.72 mmHg. For the 8.192 s configuration (Dataset A3), PELF provides slightly improved systolic error metrics and marginal variations in diastolic RMSE compared to the use of MSE. These results indicate that the proposed loss function is particularly effective when longer signal segments are available, allowing the model to better exploit temporal information.

Conclusions: This study highlights the importance of loss function optimization in DL-based non-invasive BP estimation. By integrating physiological knowledge into the training process, the proposed approach improves the accuracy and reliability of ABP predictions from PPG signals, supporting the development of cuffless BP monitoring systems suitable for telemedicine and continuous health monitoring applications.

References:

[1] Luisa De Palma, Gregorio Andria, Filippo Attivissimo, Anna Maria Lucia Lanzolla, Attilio Di Nisio, Enhancing ABP estimation through comprehensive PPG signal analysis and advanced loss function optimization, Measurement, Volume 256, Part B, 2025, 118210, ISSN 0263-2241, https://doi.org/10.1016/j.measurement.2025.118210.



A new approach to tokamaks: the study on the Alternative Magnetic Layout (AML) and the TRUST project

Research on nuclear fusion aims to make reactors more compact, efficient, and easier to build. In this context, a group of researchers from the University of Tuscia has published a preliminary study in [1] introducing a new magnetic confinement scheme, called Alternative Magnetic Layout (AML).
The idea is innovative: moving the central solenoid (CS) around the central column of the toroidal field coils (TF). This configuration reduces the reactor’s radial size, making it more compact. To compensate for the increased magnetic field required, high-temperature superconductors (HTS) are envisioned for the coils, capable of carrying higher currents and sustaining stronger fields compared to traditional superconductors.
The study shows that this solution is technically feasible, although engineering challenges remain in terms of assembly, electromagnetic force management, and integration of components. Some of the advantages include:

  • better efficiency in the use of internal space;
  • the possibility of placing poloidal field coils closer to the plasma, improving control and stability;
  • potential cost reduction and easier maintenance.
This concept will be tested in the Tuscia Research University Small Tokamak (TRUST), a new university tokamak under construction in Viterbo. TRUST will be a flexible, low-cost experimental platform designed to train future fusion engineers and test innovative technologies and materials.

In conclusion, AML represents a promising proposal for the future of fusion reactors: a more compact design, aimed at both academic research and technology transfer to industry.

References:


Which Are the Needs of People with Learning Disorders…

A new scientific paper [1] has been published in collaboration between the University of Tuscia and the Blue Cinema TV company, concerning the use of new immersive technologies with the aim of making learning more inclusive.
The paper first describes OLOS®, a life-size, interactive virtual human that serves as a storyteller and guide in museums, providing immersive experiences without the need for sensory filters such as VR headsets. OLOS® integrates audiovisual interfaces, natural language processing, and IoT capabilities, allowing users to interact via voice, touchscreen, or tangible interfaces linked to real objects. The patented system delivers high-definition, life-size visualizations using an optical apparatus that creates a holographic illusion. OLOS® supports multilingual functionality and accessibility for users with disabilities, making it a scalable and sustainable solution for cultural institutions. Its event manager processes interactions using speech recognition, Q&A engines, and video responses, enhancing engagement with museum visitors. The system is already widely applied in cultural heritage and is now being expanded to support individuals with specific learning disorders (SLDs).

Second, an analysis was carried out on two separate questionnaires: the first [2] was developed during the European project “VRAILexia”, while the second was created ad hoc for this project. The VRAILexia self-analysis questionnaire was constructed from informal interviews with a group of dyslexic volunteers, who were asked about the major difficulties they encountered during their schooling, the tools they found most useful, and the strategies they found most effective in dealing with those difficulties, with the aim of developing “BESPECIAL”, an artificial-intelligence-based platform that recommends tools and strategies to facilitate the study of students with SLDs. The questionnaire was then submitted to more than 800 students with an SLD certification. The second questionnaire, on the other hand, investigated the youngsters’ preferences regarding their experiences with museums and immersive realities. The questions about museums ranged from preferences to difficulties, probing how attractive these places are to the younger generation and whether visitors found it difficult to locate the right information. Finally, the section dealing with immersive realities explored the youngsters’ knowledge of these new technologies and their opinion on their use in learning contexts.

Thanks to this analysis, new tools were introduced within the OLOS® system that can fully customize the experience according to users’ needs. Among these tools, available on one’s own device or on the GUI of the installation, are: concept maps, to graphically display the narrators’ explanations; an illustrated dictionary, built using animations combined with simplified language; videos, to delve deeper into the topics covered; and keywords, to help fix concepts. Finally, it will be possible to activate an enriched fruition of images and videos appearing within the holographic system, helping users memorize concepts. All these tools can be recommended directly by the system, thanks to the implementation of the BESPECIAL platform within OLOS®.

References
[1] Materazzini, Michele, et al. “Which Are the Needs of People with Learning Disorders for Inclusive Museums? Design of OLOS®—An Innovative Audio-Visual Technology.” Applied Sciences 14.9 (2024): 3711.
[2] Zingoni, Andrea, et al. “Investigating issues and needs of dyslexic students at university: Proof of concept of an artificial intelligence and virtual reality-based supporting platform and preliminary results.” Applied Sciences 11.10 (2021): 4624.


Anthropology of the Algorithm: how stereotypes, biases, and cultural…

The world of artificial intelligence (AI) is rapidly evolving, but it is not without its complex challenges. The new video article titled Anthropology of the Algorithm: How Stereotypes, Biases, and Cultural Belonging Influence AI, presented by Professor Alessandra Castellani, sheds light on a crucial and often overlooked aspect of AI development: the influence of cultural biases.

In her work, Professor Castellani explores how AI is not a neutral entity but rather the product of the choices, experiences, and beliefs of its creators. Through an anthropological lens, she examines how algorithms can reflect and amplify stereotypes and biases already present in human societies.

For example, many facial recognition systems have shown lower accuracy rates when applied to individuals with darker skin tones, a problem attributed to unbalanced datasets and a lack of attention to cultural diversity during their design.

Professor Castellani places particular emphasis on the ethical responsibility of developers and tech companies. “We cannot treat AI as a tool isolated from human reality,” Castellani highlights in the video. “Every algorithm is born within a cultural context that influences its design, implementation, and even its use.”

The video article is rich with practical examples, including case studies on how cultural belonging and social dynamics influence key decisions in algorithm design. Castellani invites reflection on the importance of building inclusive AI systems that respect diversity.

The central message of the video is clear: the tech community must recognize and address the role of stereotypes and cultural biases in AI. Only through a conscious and multidisciplinary approach can technologies avoid perpetuating social inequalities or injustices.

Professor Castellani’s contribution is a call to action, not only for experts in the field but also for the entire academic and industrial community.

The video article Anthropology of the Algorithm is available on the official Res4Net website and on major academic community channels. We encourage everyone interested to watch it and join the discussion on how to make AI a truly fair and inclusive tool.

Author of the Video Article: Prof. Alessandra Castellani.

Published on: Res4Net Official Channel

Duration: 19 minutes


Language: Italian (subtitles available in English)

Don’t miss this opportunity to discover a new perspective on one of the most influential technologies of our time.


University of Messina in the SAMOTHRACE Project

SAMOTHRACE (Sicilian Micro and Nano Technology Research and Innovation Center) is a project funded by the Italian National Recovery and Resilience Plan (PNRR) with approximately 120 million euros. It involves 28 partners, including the Sicilian universities, research institutes and industry leaders. The aim is to create a strong collaboration between experts in microelectronics, microsystems, materials and microtechnologies, with a focus on Sicily but with an eye on the global market.

The SAMOTHRACE project addresses the European Commission’s “Digital, Industry & Space” challenge, while also focusing on other key areas such as health, energy, mobility, agriculture and the environment. It supports several “Global Sustainable Development Goals”, such as promoting sustainable agriculture, improving health, achieving gender equality, ensuring access to clean energy, and promoting sustainable industrial growth.

The project started in October 2022 and will last three years, ending in September 2025.

The University of Messina (UniME) plays a key role in the project, leading Spoke 2, which focuses on advanced systems and sensor technologies.

During the first two years of the project, UniME contributed to building knowledge and skills in micro- and nanotechnologies. This included hiring new researchers and offering PhD scholarships to train experts and develop human resources. UniME also launched funding programs to support companies and organizations engaged in industrial R&D in areas such as energy, environment, health, agriculture and smart mobility.

On November 20–22, 2024, the second-year review meeting of the SAMOTHRACE project was held in Palermo. Participants, including UniME, presented their research and the results achieved during the year. A video summarizing these activities is available on YouTube.

Another important meeting will take place in Palermo in March 2025. It will focus on the activities carried out and the innovative prototypes developed in the project. Detailed information about this meeting will be made available shortly on the official SAMOTHRACE website: www.samothrace.eu.


Non-contact Measurement of Intraocular Pressure (IOP) Via Corneal Deformation…

The Electrical and Electronic Measurement Research Group of Politecnico di Bari, in collaboration with the Optics BioTech Lab of the University of Maryland, has proposed a novel approach for assessing eye-blink dynamics in relation to intraocular pressure (IOP). This joint research project aims to develop a non-contact method for measuring IOP, improving on current techniques that require direct contact with the eye and reducing the clinical requirements for IOP assessment.

Reference: Non-contact Measurement of Intraocular Pressure (IOP) Via Corneal Deformation Induced by Natural Blinking (optica.org)

The study of eye blink dynamics is critical for understanding various ocular conditions, especially in the context of intraocular pressure (IOP). Traditional methods of assessing IOP are invasive and require direct contact with the eye. This research explores a non-contact method for evaluating eye blink dynamics to infer IOP, based on the force exerted by the eyelid during blinking.

The imaging system used in this study consists of an ophthalmology slit lamp equipped with an RGB camera, capable of capturing images at 130 frames per second (FPS). The camera is positioned orthogonally to the participant’s line of sight to capture lateral eye images. The images are acquired with a field of view of 510×638 pixels.

We sought a natural method to increase IOP in a healthy participant to investigate the difference in eye-blink dynamics between normal and elevated IOP. The Valsalva maneuver was identified as an appropriate solution, as it naturally elevates IOP. This maneuver involves forceful expiration against a closed glottis. During the experiment, participants were instructed to blow for 15 seconds while maintaining a pressure of 40 mmHg, measured with an analog manometer. The increase in IOP due to the Valsalva maneuver was verified using the iCare IC200 portable tonometer. Typically, IOP increases from 18 to 25 mmHg in a healthy participant. The experiment included two eye-blinking sessions with the same participant. In the first session, 17 normal blinks and 13 Valsalva-induced blinks were recorded, while the second session included 10 normal blinks and 10 Valsalva-induced blinks.

We hypothesize that the eye behaves like a spring, where the displacement is directly proportional to the applied force. Specifically, when intraocular pressure (IOP) is elevated, the eye is subjected to a greater force, resulting in faster movement than at normal IOP levels. To analyze this, we fit the corneal displacement with a first-order system response. Thus, the metrics used to test whether there is a statistically significant difference between normal and Valsalva blinks are those widely employed for first-order systems:

  1. Time constant (τ): indicates how quickly the eye opens.
  2. Rise time: the time required for the eye to transit from a partially open state (10%) to a nearly fully open state (90%).
  3. Bilinear approximation: the first-order response is approximated by two straight lines with slopes m1 and m2; the abscissa of their intersection point provides a further timing metric.
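Fitting the first-order model and deriving the rise time can be sketched as follows; the signal amplitude, time constant, and noise level below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, A, tau):
    """First-order response used to fit corneal displacement during eye opening."""
    return A * (1.0 - np.exp(-t / tau))

# Synthetic displacement trace sampled at 130 FPS (values are illustrative)
fps = 130.0
t = np.arange(0.0, 0.5, 1.0 / fps)
true_A, true_tau = 1.0, 0.060            # 60 ms time constant
y = step_response(t, true_A, true_tau)
y += np.random.default_rng(0).normal(0.0, 0.01, t.size)  # measurement noise

(A_hat, tau_hat), _ = curve_fit(step_response, t, y, p0=[1.0, 0.1])

# For a first-order system, the 10%-90% rise time equals tau * ln(9)
rise_time = tau_hat * np.log(9.0)
```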

To determine whether there is a statistically significant difference between normal and Valsalva blinks, we performed a two-sample t-test with unequal variances (Welch’s t-test). As anticipated, the dynamics of normal blinks are slower than those of Valsalva blinks. Specifically, the mean values of the time constant, rise time, and the abscissa of the intersection point for normal blinks are higher than those for Valsalva blinks, while the opposite is true for the slope of the first line of the bilinear approximation, m1. Additionally, the mean values of these metrics in the first session are close to those of the second session, strongly indicating the repeatability of the procedure.
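The Welch test itself is a single SciPy call; the time-constant samples below are fabricated, using the session-1 group sizes for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Fabricated time constants in ms (session-1 group sizes: 17 normal, 13 Valsalva)
tau_normal = rng.normal(60.0, 8.0, 17)
tau_valsalva = rng.normal(45.0, 8.0, 13)

# Two-sample t-test with unequal variances (Welch's t-test)
t_stat, p_value = ttest_ind(tau_normal, tau_valsalva, equal_var=False)
slower_when_normal = tau_normal.mean() > tau_valsalva.mean() and p_value < 0.05
```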


Figure 1 - Imaging Acquisition System
Figure 2 - Results

New Advances in Microplastic Detection: An Affordable Approach Using Transmitted…

Plastic pollution has become a global environmental crisis that has reached critical levels in recent years. The extensive use of plastic in various sectors such as food packaging, electronics and construction has resulted in an unprecedented amount of plastic waste being generated and dispersed into the environment. This plastic waste can now be found in the most remote regions of the world, including the oceans, where it poses a significant threat to marine life and ecosystems. The presence of plastic debris has the potential to permanently disrupt the natural balance of ecosystems, harming both humans and wildlife.

Identifying, quantifying and classifying microplastics is essential to addressing this critical and challenging problem. However, the detection of microplastics is a complex task due to their small size, low concentration, and the intricate nature of their physical and chemical properties.

Recent advances in analytical techniques such as Raman spectroscopy and Fourier transform infrared (FTIR) spectroscopy have shown promise in the detection and characterization of microplastics. Despite their effectiveness, the measurement time for these techniques can be relatively long, making them less suitable for high-throughput applications. In addition, the cost of equipment and maintenance, as well as the need for specialized training, are significant barriers to the widespread use of these methods.

A recent scientific article, “Microplastic Identification in Marine Environments: A Low-Cost and Effective Approach Based on Transmitted Light Measurements”, proposes a compact and affordable measurement system for the easy identification of microplastics in marine environments. This study, carried out by a research group from the University of Messina, uses transmitted light to identify microplastic debris, providing a simple and effective method for material characterization.

The proposed system consists of a single-board computer equipped with a programmable display and a digital microscope. The LCD serves as a programmable light source, while the digital microscope records and analyzes changes in the spectrum of light transmitted by the samples under test (SUT). These SUTs are placed on a clear glass slide in the optical path between the LCD and the detector (the digital microscope). The system estimates the amount of light transmitted through the samples and uses this information to identify and classify microplastics.
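A minimal sketch of the transmitted-light estimate; the per-channel ratio against a blank-slide reference frame is an assumption about the processing, not the authors’ exact algorithm:

```python
import numpy as np

def transmittance(frame_sample, frame_blank):
    """Estimate the per-channel transmitted-light fraction from microscope frames.
    frame_*: H x W x 3 arrays; a blank slide provides the reference intensity."""
    ref = frame_blank.reshape(-1, 3).mean(axis=0)
    sig = frame_sample.reshape(-1, 3).mean(axis=0)
    return sig / ref  # one value per RGB channel

# Toy frames: the sample absorbs more blue than red (values are illustrative)
blank = np.full((480, 640, 3), 200.0)
sample = blank * np.array([0.9, 0.7, 0.4])
print(transmittance(sample, blank))  # ≈ [0.9, 0.7, 0.4]
```

A classifier can then map such per-channel transmittance signatures, acquired under different LCD colors, to plastic types.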

The results of this research have been published in Vol. 13, No. 2 (2024) of the Acta IMEKO journal, where the full paper is freely available.


Effects of seismic isolation on the dynamic behaviour of…

A new scientific paper [1] has been published by the UNITUS Nuclear Fusion Research team, concerning the Italian facility “Divertor Tokamak Test” (DTT) [2]. The aim of this work is to offer a different perspective on seismic isolation, prioritizing stress reduction over the examination of relative displacements between components; further exploration of this other crucial issue is postponed to a later stage. The paper provides a detailed description of the seismic isolation system’s modelling and methodology, along with an evaluation of two different locations for the isolators.

The cryostat base consists of a horizontal plate to which the vacuum vessel is fixed. It also houses a reinforcing ring to which the magnets’ gravity supports are fixed. Beneath the plate are six radial beams, each supported by two pillars anchored to the ground; a bracing system connects each pair of pillars. Two seismic isolation solutions were analysed: the first places the isolation plane at the base of the pillars (BISO), the second at the top of the pillars (TISO), below the radial beams.

Considering the importance class IV and a peak acceleration between 0.15 g and 0.2 g, the seismic hazard level assigned to this project is high. These considerations were the input for the calculation of the local seismic response. The horizontal and vertical spectra used in the analyses are those of the so-called “Collapse Limit State (SLC)”, the most severe case in terms of seismic action, with a return period of 1950 years. The behaviour of the toroidal complex of the DTT fusion machine was analysed using both static and spectral analyses based on the most severe local seismic response spectrum of the Frascati site. Given the high stresses on the cryostat base under these conditions, the two seismic isolation solutions were studied.
Both systems showed significant benefits with respect to the non-isolated configuration, reducing the maximum stresses by an order of magnitude. Of the two isolated configurations, TISO guarantees the lower stresses in the cryostat base, although the difference is quite small. Taking the seismic isolation into account, a comparison was also made between the analytical results obtained with a simplified one-degree-of-freedom (1-DOF) model and the numerical results in terms of horizontal displacements of the system. The numerical results did not differ significantly from the analytical ones, by at most 15% in the BISO configuration, making the 1-DOF model a valid option for first-attempt calculations. The paper reports preliminary results without identifying an integrated solution implementing real seismic isolators. Further design and analysis activities will address the isolation devices, the control of undesirable movements, differential seismic ground motions, and the control of displacements relative to the surrounding ground and constructions.

Figure 1: FE model used for the analyses.

SPECKLE PATTERN ACQUISITION AND STATISTICAL PROCESSING FOR ANALYSIS OF…

A speckle pattern (SP) is the granular visual pattern generated when highly coherent optical radiation is shone onto an object with a non-uniform structure (i.e., one characterized by irregularities with dimensions of the same order of magnitude as the wavelength). If captured by an imaging system, SP images appear as a disordered, chaotic arrangement of bright spots and dark regions; however, their statistical properties are highly correlated with the structure of the object that acts as the SP source. Traditionally, SP-based techniques have mainly been employed to measure surface roughness and to study the thermal and mechanical properties of solid specimens. However, fluid suspensions (such as animal and plant-based milks) also generate SPs, since they are constituted by scattering elements floating in a surrounding liquid matrix. The SP generated by a turbid liquid is particularly challenging to analyse, since the suspended particles undergo Brownian motion and the produced pattern is time-variant. For this reason, only very recently have a few (still very preliminary) works appeared in the scientific literature on the use of SP imaging to investigate turbid liquids.

In this work, the LabEO team has developed a simple low-cost configuration based on a semiconductor laser and a PC-interfaced CMOS camera to acquire SP images generated by irradiating scattering fluids. Whereas in other works the analytical technique has generally been demonstrated only on phantom suspensions prepared ad hoc in the laboratory, the samples tested here were obtained by water dilution of commercial rice milk, which naturally contains lipid micelles acting as scattering particles. After acquisition, the SP frames were processed to extract statistical parameters that correlate with the particulate content. Preliminary results show that the proposed technique makes it possible to easily identify samples with different vegetable-milk concentrations.

In more detail, the setup investigated at LabEO is low-cost and very simple to operate: it comprises a semiconductor red laser diode (for SP excitation) and a monochrome CMOS camera (for the acquisition of SP images). Light is emitted at a wavelength of 658 nm, with an optical power of about 20 mW. The radiation is shone onto a plastic cuvette containing the sample at an angle of about 30°, and the camera is placed in front of the sample at a distance of about 16 cm, making the setup quite compact and suitable for a portable sensing system.

Experimental measurements were carried out on nine different suspensions obtained by dilution, with deionized water, of commercial rice milk containing 11 g/L of lipids and 130 g/L of carbohydrates. For each sample tested, 100 SP frames were acquired with the CMOS camera and then processed in the MATLAB environment to extract statistical features. For each SP image, expressed as a matrix of gray-level pixels, the average gray-level intensity, the mode, and the median were retrieved. It was found that these three parameters vary linearly with concentration for rice-milk concentrations lower than 80% v/v, whereas they tend to saturate at higher concentrations. This non-linear behaviour can be explained by two counteracting effects. When water is added to milk, the concentration of scattering elements decreases, inducing a decrease in the recorded average intensity; on the other hand, adding water increases the refractive-index difference between the lipid vesicles (around 1.42–1.45 RIU) and the surrounding matrix, which would lead to a higher collected intensity. Hence, the average gray-level intensity, the mode, and the median can be efficiently used for sample discrimination only for highly diluted milks.

To extract further information from the SP images, the histograms of the gray-level distribution were computed and the kurtosis and skewness retrieved. It was observed that the shape and characteristics of the histograms depend on the rice-milk concentration: in particular, skewness and kurtosis are monotonically increasing functions of rice-milk concentration. Hence, these two parameters can be used to identify milk dilutions without ambiguity even when the quantity of added water is smaller than 20% v/v.
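The per-frame statistics described above (the original processing was done in MATLAB) can be sketched in Python; the synthetic frames below merely mimic speckle-like intensity statistics:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def sp_features(frame):
    """Statistical descriptors of one gray-level speckle frame (2-D uint8 array)."""
    g = frame.ravel()
    return {
        "mean": float(g.mean()),
        "median": float(np.median(g)),
        "mode": int(np.bincount(g).argmax()),
        "skewness": float(skew(g)),
        "kurtosis": float(kurtosis(g)),
    }

def averaged_features(frames):
    """Average each descriptor over a stack of frames (100 in the experiment)."""
    per_frame = [sp_features(f) for f in frames]
    return {k: float(np.mean([d[k] for d in per_frame])) for k in per_frame[0]}

# Fully developed speckle has negative-exponential-like intensity statistics
rng = np.random.default_rng(3)
frames = rng.exponential(40.0, size=(5, 64, 64)).clip(0, 255).astype(np.uint8)
feats = averaged_features(frames)
```

Regressing such feature vectors against the known dilution levels is then a standard calibration problem.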

In conclusion, this work presents a cost-effective, easy-to-use optical setup for the identification of turbid samples by means of SP imaging. The presented results are very promising and suggest the possibility of extracting informative statistical parameters from SP images. Future perspectives surely include the investigation of more complex statistics (such as gray-level co-occurrence matrices and the geometrical properties of SP grains) and the application of artificial-intelligence tools for the automatic recognition of samples. Since the proposed detection technique is contactless, remote, and label-free, a very interesting application will be the recognition of different types of milk and the identification of their adulteration.

More information on the research carried out at the Laboratory of ElectroOptics is available on the laboratory website.

References

[1]   V. Bello, E. Bodo and S. Merlo, “Speckle Pattern Acquisition and Statistical Processing for Analysis of Turbid Liquids”, IEEE Transactions on Instrumentation and Measurement, Vol. 72, pp. 1-4, 2023, Art no. 7005004. DOI: 10.1109/TIM.2023.3289543

[2] J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena, Berlin, Germany: Springer, 1975, pp. 9–75.

[3] B. M. Oliver, “Sparkling spots and random diffraction,” Proceedings of the IEEE, Vol. 51, No. 1, pp. 220–221, 1963.

Figure 1: Schematic representation of the optical configuration for SP excitation and acquisition of SP images and of the workflow for extraction of statistical features for identification of rice milk samples.