Regardless of the business, every data analysis initiative requires data of a certain quality level to produce reliable outcomes. Hence, persisting large amounts of data in a DBMS is not enough. Data quality assessment and data integration complement each other to raise data quality to the levels required by the business. The former detects data defects in several types of data (e.g., atemporal, temporal, spatial, semi-structured, unstructured, streaming, image), while the latter attempts to identify similarities among entities shared by different datasets. Both rely on a broad set of resources (e.g., machine learning, data visualization, natural language processing, statistics) to reach their aims in conventional and Big Data environments.
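As a minimal sketch of the integration side, entity similarity can be approximated with a token-based measure such as Jaccard similarity. The record values, threshold, and function names below are illustrative assumptions, not taken from any specific system.

```python
# Hypothetical sketch: matching records across two datasets using
# token-based Jaccard similarity. Names and threshold are assumptions.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def match_entities(left, right, threshold=0.5):
    """Pair up records from two datasets whose similarity meets the threshold."""
    return [
        (l, r, jaccard(l, r))
        for l in left
        for r in right
        if jaccard(l, r) >= threshold
    ]

# Two illustrative datasets sharing one entity under different spellings.
crm = ["Maria Silva", "Joao P. Souza"]
erp = ["maria silva", "J. Souza", "Ana Costa"]
pairs = match_entities(crm, erp)
```

In practice, entity resolution combines several such measures (edit distance, phonetic codes, embeddings) and blocking strategies to avoid the quadratic pairwise comparison shown here.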
Visualization combines computer and human abilities to enable the analysis of complex data or software through interactive visual representations. Visualizations can enhance a myriad of scenarios, including visual data quality assessment, education, database administration, database security, cloud administration, financial analysis, scientific analysis, knowledge exploration, software evolution, and much more. Moreover, new or improved visualization techniques (metaphors), interaction techniques, and the evaluation of visualization solutions represent some of the promising lines of work on this subject.
Computer science is a diverse field that gathers several technical, personal, and group skills. Learning such complex and interconnected skills demands much more than blackboards and slide presentations. The use of pedagogical methods (e.g., Problem-based Learning, pair collaboration), combined or not with educational software (e.g., simulators, intelligent interactive tutors, visualizations, animations, digital storytelling), is a must to leverage computer science learning.
Database systems are ubiquitous, underpinning a myriad of relevant applications that handle huge volumes of data of different shapes. This scenario presses for new data management approaches to deal with schema evolution, data movement, self-tuning, and self-adaptation, among others. Moreover, it demands ways to expose users to semantic shifts in the data.
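To make the schema evolution challenge concrete, the sketch below compares two versions of a table schema and reports what changed. The schemas, column names, and function name are hypothetical illustrations, not part of any particular DBMS API.

```python
# Hypothetical sketch: detecting schema evolution between two versions
# of a table, where each schema is given as {column_name: type_name}.
# All names and types below are illustrative assumptions.

def schema_diff(old: dict, new: dict) -> dict:
    """Return added, removed, and type-changed columns between two schemas."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    retyped = sorted(c for c in set(old) & set(new) if old[c] != new[c])
    return {"added": added, "removed": removed, "retyped": retyped}

# Version 1 of an illustrative product table, and an evolved version 2.
v1 = {"id": "int", "name": "text", "price": "int"}
v2 = {"id": "int", "name": "text", "price": "numeric", "stock": "int"}
diff = schema_diff(v1, v2)
```

A real self-adaptive system would go further, e.g., migrating stored data for retyped columns and alerting downstream consumers about the semantic shift.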