2007
Authors
Rebelo, C; Brito, PQ; Soares, C; Jorge, A; Brandao, R;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS
Abstract
The potential value of a market segmentation for a company is usually assessed in terms of six criteria: identifiability, substantiality, accessibility, responsiveness, stability and actionability. These are widely accepted as essential criteria, but they are difficult to quantify. Quantification is particularly important in early stages of the segmentation process, especially when automatic clustering methods are employed. With such methods it is easy to produce a large number of segmentations but only the most interesting ones should be selected for further analysis. In this paper, we address the problem of how to quantify the value of a segmentation according to the criteria above. We propose several measures and test them on a case study, consisting of a segmentation of portal users.
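A minimal sketch (not from the paper) of how two of the six criteria could be quantified for a clustering of portal users; the measure definitions below are illustrative assumptions, not the measures proposed by the authors.

import numpy as np

def substantiality(labels):
    """Relative size of each segment; segments too small to be worth targeting score low."""
    labels = np.asarray(labels)
    _, counts = np.unique(labels, return_counts=True)
    return counts / labels.size

def stability(labels_a, labels_b):
    """Pairwise agreement between two segmentations of the same users,
    e.g. obtained on two time windows (a simple Rand-style proxy)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    return (same_a == same_b).mean()

# Example: three segments of portal users, re-clustered a month later.
print(substantiality([0, 0, 1, 1, 1, 2]))                  # segment size shares
print(stability([0, 0, 1, 1, 1, 2], [0, 0, 1, 1, 2, 2]))   # agreement in [0, 1]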
2008
Authors
Jorge, A; Pocas, J; Azevedo, PJ;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Visualization in data mining is typically related to data exploration. In this chapter we present a methodology for the post-processing and visualization of association rule models. One aim is to provide the user with a tool that enables the exploration of a large set of association rules. The method is inspired by the hypertext metaphor. The initial set of rules is dynamically divided into small, comprehensible sets or pages, according to the interest of the user. From each set, the user can move to other sets by choosing an appropriate operator. The available operators transform sets of rules into sets of rules, allowing the user to focus on interesting regions of the rule space. Each set of rules can then also be viewed through different graphical representations. The tool is web-based and dynamically generates SVG pages to render the graphics. Association rules are given in PMML format. © 2008 Springer-Verlag Berlin Heidelberg.
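A minimal sketch, under assumptions, of the operator idea described above: rules are plain records and each operator maps a set of rules to another set. The rule fields, operator names and example rules are illustrative only and do not follow the paper's tool or the PMML schema.

from typing import NamedTuple, FrozenSet, List

class Rule(NamedTuple):
    antecedent: FrozenSet[str]
    consequent: str
    support: float
    confidence: float

def focus_on_item(rules: List[Rule], item: str) -> List[Rule]:
    """Keep only rules whose antecedent mentions the given item."""
    return [r for r in rules if item in r.antecedent]

def top_by_confidence(rules: List[Rule], n: int) -> List[Rule]:
    """One 'page' of the rule space: the n most confident rules."""
    return sorted(rules, key=lambda r: r.confidence, reverse=True)[:n]

rules = [
    Rule(frozenset({"milk", "bread"}), "butter", 0.10, 0.80),
    Rule(frozenset({"beer"}), "chips", 0.05, 0.65),
]
# Chaining operators narrows the view step by step, hypertext-style.
page = top_by_confidence(focus_on_item(rules, "milk"), 10)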
2006
Authors
Escudeiro, NF; Jorge, AM;
Publication
Semantics, Web and Mining
Abstract
In this paper we propose a methodology for automatically retrieving document collections from the web on specific topics, organizing them, and keeping them up to date over time, according to user-specific persistent information needs. The collected documents are organized according to user specifications and are classified partly by the user and partly automatically. A presentation layer enables the exploration of large sets of documents and, simultaneously, monitors and records user interaction with these document collections. The quality of the system is permanently monitored: the system periodically measures and stores the values of its quality parameters. Using this quality log, it is possible to maintain the quality of the resources by triggering procedures aimed at correcting or preventing quality degradation.
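A minimal sketch, under assumptions, of the quality-log idea: quality parameters are measured periodically, stored, and a corrective procedure is triggered on degradation. The parameter names, the threshold and the refresh_collection stub are hypothetical, not the system's actual quality model.

import time

def measure_quality(collection):
    """Illustrative quality parameters for a topic-specific document collection."""
    labelled = sum(1 for d in collection if d.get("label"))
    fresh = sum(1 for d in collection if time.time() - d["retrieved"] < 30 * 86400)
    n = max(len(collection), 1)
    return {"labelled_ratio": labelled / n, "fresh_ratio": fresh / n}

def refresh_collection(collection):
    """Placeholder for a corrective procedure, e.g. re-querying the web sources."""

quality_log = []

def monitor(collection, min_fresh=0.5):
    snapshot = measure_quality(collection)
    quality_log.append((time.time(), snapshot))   # persistent record of quality over time
    if snapshot["fresh_ratio"] < min_fresh:       # degradation detected
        refresh_collection(collection)
    return snapshot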
2005
Authors
Soares, C; Jorge, AM; Domingues, MA;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS
Abstract
We propose a methodology to monitor the quality of the meta-data used to describe content in web portals. It is based on the analysis of the meta-data using statistics, visualization and data mining tools. The methodology enables the site's editor to detect and correct problems in the description of contents, thus improving the quality of the web portal and the satisfaction of its users. We also define a general architecture for a platform to support the proposed methodology. We have implemented this platform and tested it on a Portuguese portal for management executives. The results validate the proposed methodology.
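A minimal sketch, under assumptions, of the kind of descriptive statistics such a platform could compute over content meta-data; the field names and example records are hypothetical and not taken from the portal studied in the paper.

from collections import Counter

records = [
    {"title": "Budgeting basics", "keywords": ["finance"], "author": "A. Silva"},
    {"title": "",                 "keywords": [],          "author": None},
]

def metadata_report(records):
    """Fraction of records with a missing or empty value, per meta-data field."""
    n = max(len(records), 1)
    missing = Counter()
    for r in records:
        for field, value in r.items():
            if not value:
                missing[field] += 1
    return {field: count / n for field, count in missing.items()}

print(metadata_report(records))   # e.g. {'title': 0.5, 'keywords': 0.5, 'author': 0.5}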
2005
Authors
Jorge, AM; Azevedo, PJ;
Publication
DISCOVERY SCIENCE, PROCEEDINGS
Abstract
In this paper we study a new technique we call post-bagging, which consists in resampling parts of a classification model rather than the data. We do this with a particular kind of model: large sets of classification association rules, in combination with ordinary best-rule and weighted voting approaches. We empirically evaluate the effects of the technique in terms of classification accuracy. We also discuss the predictive power of different metrics used for association rule mining, such as confidence, lift, conviction and χ². We conclude that, for the described experimental conditions, post-bagging improves classification results and that the best metric is conviction.
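A minimal sketch, under assumptions, of the rule metrics named above and of a post-bagging-style vote over resampled rule subsets. The lift and conviction formulas are the standard definitions; the rule representation, the conviction weighting and the cap on infinite conviction are illustrative choices, not the paper's exact procedure.

import random

def lift(confidence, consequent_support):
    """lift(X -> Y) = conf(X -> Y) / supp(Y)."""
    return confidence / consequent_support

def conviction(confidence, consequent_support):
    """conviction(X -> Y) = (1 - supp(Y)) / (1 - conf(X -> Y))."""
    if confidence == 1.0:
        return float("inf")
    return (1.0 - consequent_support) / (1.0 - confidence)

def post_bagging_predict(rules, example, n_samples=30, frac=0.5):
    """Resample subsets of an association-rule model and let each subset
    vote for a class, weighted here by (capped) conviction."""
    votes = {}
    for _ in range(n_samples):
        subset = random.sample(rules, max(1, int(frac * len(rules))))
        matching = [r for r in subset if r["antecedent"] <= example]
        for r in matching:
            w = min(conviction(r["confidence"], r["consequent_support"]), 10.0)
            votes[r["class"]] = votes.get(r["class"], 0.0) + w
    return max(votes, key=votes.get) if votes else None

# rules: list of dicts with keys "antecedent" (set), "class", "confidence",
# "consequent_support"; example: the set of items describing a test case.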
2002
Authors
Jorge, A; Moyle, S; Voss, A;
Publication
COLLABORATIVE BUSINESS ECOSYSTEMS AND VIRTUAL ENTERPRISES
Abstract
The basic principles of a methodology for remote collaborative data mining are proposed. Starting from CRISP-DM, a general process model for carrying out data mining projects, it is described how the principles of knowledge sharing and ease of communication can be embedded in the data mining process. The aim is to allow the execution of data mining projects with the participation of multiple experts working from distant locations. All the participants in such a project can profit from the knowledge produced by others and share their own knowledge online with the other participants. The knowledge produced (for example, data transformations, working hypotheses, models, results of experiments) is also stored for future inspection and use, in pursuit of organizational learning. A prototypical implementation (RAMSYS) of the remote collaborative methodology is described with examples.
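A minimal sketch, under assumptions, of the knowledge-sharing principle only: produced knowledge items are posted to a shared store where any participant can later inspect them. The class names and fields are hypothetical and do not describe the actual RAMSYS design.

import datetime
from dataclasses import dataclass, field
from typing import List

@dataclass
class KnowledgeItem:
    author: str
    kind: str          # e.g. "data transformation", "hypothesis", "model", "result"
    description: str
    created: datetime.datetime = field(default_factory=datetime.datetime.utcnow)

class SharedStore:
    """All participants post items here and can inspect everyone's contributions."""
    def __init__(self):
        self.items: List[KnowledgeItem] = []
    def post(self, item: KnowledgeItem):
        self.items.append(item)
    def by_kind(self, kind: str) -> List[KnowledgeItem]:
        return [i for i in self.items if i.kind == kind]

store = SharedStore()
store.post(KnowledgeItem("analyst_lisbon", "hypothesis", "seasonality drives churn"))
print(len(store.by_kind("hypothesis")))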