Publications

2024

LNDb v4: pulmonary nodule annotation from medical reports

Authors
Ferreira, CA; Sousa, C; Marques, ID; Sousa, P; Ramos, I; Coimbra, M; Campilho, A;

Publication
SCIENTIFIC DATA

Abstract
Given the high prevalence of lung cancer, an accurate diagnosis is crucial. In the diagnosis process, radiologists play an important role by examining numerous radiology exams to identify different types of nodules. To aid the clinicians' analytical efforts, computer-aided diagnosis can streamline the process of identifying pulmonary nodules. For this purpose, medical reports can serve as valuable sources for automatically retrieving image annotations. Our study focused on converting medical reports into nodule annotations, matching textual information with manually annotated data from the Lung Nodule Database (LNDb), a comprehensive repository of lung scans and nodule annotations. As a result of this study, we have released a tabular data file containing information from 292 medical reports in the LNDb, along with files detailing nodule characteristics and corresponding matches to the manually annotated data. The objective is to enable further research in lung cancer by bridging the gap between existing reports and additional manual annotations that may be collected, thereby fostering discussion about the advantages and disadvantages of these two data types.

2024

Using Source-to-Source to Target RISC-V Custom Extensions: UVE Case-Study

Authors
Henriques, M; Bispo, J; Paulino, N;

Publication
PROCEEDINGS OF THE RAPIDO 2024 WORKSHOP, HIPEAC 2024

Abstract
Hardware specialization is seen as a promising avenue for improving computing efficiency, with reconfigurable devices as excellent deployment platforms for application-specific architectures. One approach to hardware specialization is via the popular RISC-V, where Instruction Set Architecture (ISA) extensions for domains such as Edge Artificial Intelligence (AI) are already appearing. However, to use the custom instructions while maintaining a high abstraction level (e.g., C/C++), the assembler and compiler must be modified. Alternatively, inline assembly can be introduced manually by a software developer with expert knowledge of the hardware modifications in the RISC-V core. In this paper, we consider a RISC-V core with a vectorization and streaming engine that supports the Unlimited Vector Extension (UVE), and propose an approach to automatically transform annotated C loops into UVE-compatible code via automatic insertion of inline assembly. We rely on a source-to-source transformation tool, Clava, to perform sophisticated code analysis and transformations via scripts. We use pragmas to identify code sections amenable to vectorization and/or streaming, and use Clava to automatically insert inline UVE instructions, avoiding extensive modifications to existing compiler projects. We produce UVE binaries that are functionally correct when compared to handwritten versions with inline assembly, and that match, and sometimes improve on, the number of executed instructions for a set of six benchmarks from the Polybench suite. These initial results are evidence that this kind of translation is feasible, and we consider it possible in future work to target more complex transformations or other ISA extensions, accelerating the adoption of hardware/software co-design flows for generic application cases.

2024

Reagentless Vis-NIR Spectroscopy Point-of-Care for Feline Total White Blood Cell Counts

Authors
Barroso, TG; Queirós, C; Monteiro Silva, F; Santos, F; Gregório, AH; Martins, RC;

Publication
BIOSENSORS-BASEL

Abstract
Spectral point-of-care technology is reagentless, requires minimal sampling (<10 µL), and can be performed in real time. White blood cells are non-dominant in blood and in spectral information, suffering significant interference from dominant constituents such as red blood cells, hemoglobin, and bilirubin. Larger white blood cells can account for 0.5% to 22.5% of the blood spectra information. Knowledge expansion was performed using data augmentation, hybridizing 94 real-world blood samples into 300 synthetic data samples. The synthetic samples are representative of real-world data, expanding the detailed spectral information through sample hybridization and allowing us to unscramble the white blood cell information from the spectra, with correlations of 0.7975 to 0.8397 and a mean absolute error of 32.25% to 34.13%. Furthermore, we achieved a diagnostic efficiency between 83% and 100% inside the reference interval (5.5 to 19.5 × 10⁹ cells/L), and 85.11% for cases with extremely high white blood cell counts. At the covariance-mode level, white blood cells are quantified using information orthogonal to red blood cells, maximizing sensitivity and specificity towards white blood cells and avoiding the use of non-specific natural correlations present in the dataset; the specificity of the white blood cell spectral information is thus increased. This research is a step towards high-specificity, reagentless, miniaturized spectral point-of-care hematology technology for veterinary medicine.
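The abstract does not specify the exact hybridization procedure, but the idea of expanding 94 real samples into 300 synthetic ones can be illustrated with a minimal mixup-style sketch, where each synthetic spectrum is a convex combination of a random pair of real spectra (all array shapes and the placeholder data below are assumptions for illustration):

```python
import numpy as np

def hybridize(spectra, labels, n_synthetic, seed=0):
    """Create synthetic samples as convex combinations of random pairs
    of real samples (a mixup-style hybridization sketch)."""
    rng = np.random.default_rng(seed)
    n = len(spectra)
    i = rng.integers(0, n, n_synthetic)            # first parent of each pair
    j = rng.integers(0, n, n_synthetic)            # second parent
    w = rng.uniform(0.0, 1.0, (n_synthetic, 1))    # mixing weights
    synth_x = w * spectra[i] + (1 - w) * spectra[j]
    synth_y = w[:, 0] * labels[i] + (1 - w[:, 0]) * labels[j]
    return synth_x, synth_y

# Expand 94 real spectra into 300 synthetic samples, mirroring the sample
# counts in the abstract (spectra here are random placeholders, not data).
real_x = np.random.default_rng(1).random((94, 128))       # 94 samples, 128 wavelengths
real_y = np.random.default_rng(2).uniform(5.5, 19.5, 94)  # WBC counts (×10⁹ cells/L)
synth_x, synth_y = hybridize(real_x, real_y, 300)
print(synth_x.shape, synth_y.shape)  # (300, 128) (300,)
```

Because every synthetic label is a convex combination of two real labels, the augmented set never extrapolates outside the observed label range.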

2024

Condition Invariance for Autonomous Driving by Adversarial Learning

Authors
Silva, DTE; Cruz, RPM;

Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I

Abstract
Object detection is a crucial task in autonomous driving, where domain shift between the training and test sets is one of the main reasons for the poor performance of a detector when deployed. Erroneous priors may be learned from the training set; a model must therefore be invariant to conditions that might promote such priors. To tackle this problem, we propose an adversarial learning framework consisting of an encoder, an object detector, and a condition classifier. The encoder is trained to deceive the condition classifier and aid the object detector as much as possible throughout the learning stage, in order to obtain highly discriminative features. Experiments showed that this framework is not very competitive regarding the trade-off between precision and recall, but it does improve the ability of the model to detect smaller objects and some object classes.
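A common way to make an encoder "deceive" an auxiliary classifier is gradient reversal: the classifier's gradient is negated before it reaches the encoder, so the encoder is pushed to *increase* the condition loss. The toy NumPy sketch below illustrates only that coupling, under assumed shapes and a linear encoder; it is not the paper's architecture:

```python
import numpy as np

# Toy illustration of the adversarial coupling: the encoder receives the
# *reversed* gradient from the condition classifier, so it learns features
# the classifier cannot exploit. All names and shapes are hypothetical.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(32, 16))        # batch of inputs
cond = rng.integers(0, 2, 32)        # condition label (e.g. day vs. night)
W = 0.1 * rng.normal(size=(16, 8))   # linear "encoder"
v = 0.1 * rng.normal(size=8)         # logistic condition classifier

feats = x @ W                        # encoded features
p = sigmoid(feats @ v)               # predicted condition probability
eps = 1e-12                          # numerical safety for the log
cond_loss = -np.mean(cond * np.log(p + eps) + (1 - cond) * np.log(1 - p + eps))

# Backprop the condition loss into the features, then into the encoder.
d_feats = np.outer((p - cond) / len(x), v)
grad_W_plain = x.T @ d_feats            # ordinary gradient (would help the classifier)
lam = 0.5                               # adversarial weight
grad_W_encoder = -lam * grad_W_plain    # gradient reversal: encoder fights the classifier
```

A gradient step along `grad_W_encoder` moves the encoder in the exact opposite direction of what would help the condition classifier, which is the essence of the adversarial objective.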

2024

3D Modelling to Address Pandemic Challenges: A Project-Based Learning Methodology

Authors
Rocha, T; Ribeiro, A; Oliveira, J; Nunes, RR; Carvalho, D; Paredes, H; Martins, P;

Publication
CoRR

Abstract
3D modelling is a revolutionary tool in medical education. This type of technology enables a more interactive teaching approach, making information retention more effective and enhancing students' understanding. 3D modelling allows for the creation of precise representations of the human body, as well as interaction with three-dimensional models, giving students a better spatial understanding of the different organs and systems and enabling simulations of surgical and technical procedures. In this way, medical education is enriched with a more realistic and safe educational experience. The goal is to understand whether students and schools, when challenged, can play an important role in addressing health issues in their community. School-led projects are directed towards educational scenarios that emphasize STEM education, tackling relevant public health problems through open-school initiatives. By implementing an educational scenario focused on 3D modelling and leveraging technology, we aim to raise community awareness of public health issues.

2024

Fabric Defect Detection and Localization

Authors
Oliveira, F; Carneiro, D; Ferreira, H; Guimaraes, M;

Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE IN MANUFACTURING, ESAIM 2023

Abstract
Quality inspection is crucial in the textile industry, as it ensures that final products meet the required standards. It helps detect and address defects such as fabric flaws and stitching irregularities, enhancing customer satisfaction and optimizing production efficiency by identifying areas for improvement, reducing waste, and minimizing rework. In the competitive textile market, it is vital for maintaining customer loyalty, brand reputation, and sustained success. Nonetheless, despite the importance of quality inspection, it is becoming increasingly hard to hire and train people for such tedious and repetitive tasks. In this context, there is growing interest in automated quality control techniques for the industrial domain. In this paper, we describe a computer vision model for localizing and classifying different types of defects in textiles. The model achieved an mAP@0.5 of 0.96 on the validation dataset. While this model was trained with a publicly available dataset, we will soon use the same architecture with images collected from Jacquard looms in the context of a funded research project. This paper thus represents an initial validation of the model for the purposes of fabric defect detection.
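The mAP@0.5 metric reported above scores a predicted defect box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion (the box coordinates below are made up for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# A predicted defect box counts as a true positive at mAP@0.5 when it
# overlaps a ground-truth box with IoU >= 0.5.
pred = (10, 10, 50, 50)
gt = (15, 15, 55, 55)
print(round(iou(pred, gt), 3))  # 0.62 -> true positive at the 0.5 threshold
```

Average precision is then computed from the precision-recall curve of these matches, and mAP@0.5 averages it over all defect classes.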
