A test applied to data for atomicity, consistency, isolation, and durability.
Reports generated for a one-time need.
The attempt to reach a specific audience with a specific message, typically by either contacting them directly or placing contextual ads on the Web.
Collecting data from various databases for the purpose of data processing or analysis.
A mathematical formula placed in software that performs an analysis on a set of data.
Using software-based algorithms and statistics to derive meaning from data.
Software or software and hardware that provides the tools and computational power needed to build and perform many different analytical queries.
The process of identifying rare or unexpected items or events in a dataset that do not conform to other items in the dataset.
The severing of links between people in a database and their records to prevent the discovery of the source of the records.
An abbreviation for application programming interface: a set of programming standards and instructions for accessing or building web-based software applications.
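A minimal sketch of calling a web-based API over HTTP, assuming the requests library and a hypothetical JSON endpoint at https://api.example.com/v1/items (the URL and parameters are illustrative only):

```python
import requests

response = requests.get(
    "https://api.example.com/v1/items",           # hypothetical endpoint
    params={"category": "books", "limit": 10},    # query parameters defined by the API
    timeout=10,
)
response.raise_for_status()   # fail loudly on HTTP errors
items = response.json()       # most web APIs return JSON
for item in items:
    print(item)
```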
Software that is designed to perform a specific task or suite of tasks.
The apparent ability of a machine to apply information gained from previous experience accurately to new situations in a way that a human would.
Any method of automatically identifying and collecting data on items, and then storing the data in a computer system. For example, a scanner might collect data about a product being shipped via an RFID chip.
Using data about people’s behavior to understand intent and predict future actions.
This term has been defined in many ways, but along similar lines. Doug Laney, then an analyst at the META Group, first defined big data in a 2001 report called “3-D Data Management: Controlling Data Volume, Velocity and Variety.” Volume refers to the sheer size of the datasets. The McKinsey report, “Big Data: The Next Frontier for Innovation, Competition, and Productivity,” expands on the volume aspect by saying that “‘big data’ refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.” Velocity refers to the speed at which the data is acquired and used. Not only are companies and organizations collecting more and more data at a faster rate, they want to derive meaning from that data as soon as possible, often in real time. Variety refers to the different types of data that are available to collect and analyze in addition to the structured data found in a typical database. Barry Devlin of 9sight Consulting identifies four categories of information that constitute big data:
1. Machine-generated data, including RFID data, geolocation data from mobile devices, and data from monitoring devices such as utility meters.
2. Computer log data, such as clickstreams from websites.
3. Textual social media information from sources such as Twitter and Facebook.
4. Multimedia social and other information from Flickr, YouTube, and other similar sites.
IDC analyst Benjamin Woo has added a fourth V to the definition: value. He says that because big data is about supporting decisions, you need the ability to act on the data and derive value.
The use of technology to identify people by one or more of their physical traits.
The act of monitoring your brand’s reputation online, typically by using software to automate the process.
A unit that represents a very large number of bytes. The brontobyte has been proposed as a unit of measure for data beyond the yottabyte scale, but it is not yet an officially recognized unit.
The general term used for the identification, extraction, and analysis of data.
CDRs contain data that a telecommunications company collects about phone calls, such as time and length of call. This data can be used in any number of analytical applications.
A popular choice of columnar database for use in big data applications. It is an open source database managed by The Apache Software Foundation.
Cell phones generate a tremendous amount of data, and much of it is available for use with analytical applications.
Data analysis for the purpose of assigning the data to a particular group or class.
The analysis of users’ Web activity through the items they click on a page.
Clojure is a dynamic programming language based on LISP that uses the Java Virtual Machine (JVM). It is well suited for parallel data processing.
A broad term that refers to any Internet-based application or service that is hosted remotely.
Data analysis for the purpose of identifying similarities and differences among data sets so that similar data sets can be clustered together.
A database that stores data by column rather than by row. In a row-based database, a row might contain a name, address, and phone number. In a column-oriented database, all names are in one column, addresses in another, and so on. A key advantage of a columnar database is faster hard disk access.
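A rough sketch of the row-oriented versus column-oriented layouts described above, using plain Python structures (illustrative only; real columnar databases add compression, indexing, and disk-aware layouts):

```python
# Row-oriented: each record is stored together.
rows = [
    {"name": "Ada",   "address": "12 Elm St", "phone": "555-0100"},
    {"name": "Grace", "address": "9 Oak Ave", "phone": "555-0101"},
]

# Column-oriented: one contiguous list per field.
columns = {
    "name":    ["Ada", "Grace"],
    "address": ["12 Elm St", "9 Oak Ave"],
    "phone":   ["555-0100", "555-0101"],
}

# An analytical query that touches only one field can read a single column
# instead of scanning every row.
print(columns["name"])
```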
Data analysis that compares two or more data sets or processes to identify patterns in large data sets.
Keeping tabs on competitors’ activities on the Web using software to automate the process.
CEP is the process of monitoring and analyzing all events across an organization’s systems and acting on them when necessary in real time.
Structured data that comprise two or more interrelated parts and, therefore, are difficult for structured query languages and tools to process.
A digital library of historical environmental data from satellites operated by the U.S. National Oceanic and Atmospheric Administration (NOAA).
Any data generated by a computer rather than a human–a log file for example.
The ability to execute multiple processes at the same time.
The act of making an intuition-based decision appear to be data-based.
Software that facilitates the management and publication of content on the Web.
Any of a broad class of statistical relationships involving dependence. Familiar examples of dependent phenomena include the correlation between the physical statures of parents and their offspring, and the correlation between the demand for a product and its price.
A means to determine a statistical relationship between variables, often for the purpose of identifying predictive factors among the variables.
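A small sketch of measuring correlation and fitting a simple linear regression with NumPy, using made-up price and demand figures (the numbers and variable names are illustrative only):

```python
import numpy as np

price  = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
demand = np.array([200,  180,  150,  140,  110])

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(price, demand)[0, 1]

# Least-squares fit of demand as a linear function of price.
slope, intercept = np.polyfit(price, demand, deg=1)

print(f"correlation r = {r:.2f}")
print(f"demand is roughly {slope:.1f} * price + {intercept:.1f}")
```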
Analysis that can attribute sales, show average order value, or calculate customer lifetime value.
The act of submitting a task or problem to the public for completion or solution.
Software that helps businesses manage sales and customer service processes.
A graphical reporting of static or real-time data on a desktop or mobile device. The data represented is typically high-level to give managers a quick report on status or performance.
A quantitative or qualitative value. Common types of data include sales figures, marketing research results, readings from monitoring equipment, user actions on a website, market growth projections, demographic information, and customer lists.
The act or method of viewing or retrieving stored data.
The Digital Accountability and Transparency Act of 2014. The U.S. law is intended to make information on federal government expenditures more accessible by requiring the Treasury Department and the White House Office of Management and Budget to standardize and publish U.S. federal spending data.
The act of collecting data from multiple sources for the purpose of reporting or analysis.
A person responsible for the tasks of modeling, preparing, and cleaning data for the purpose of deriving actionable information from it.
The application of software to derive information or meaning from data. The end result might be a report, an indication of status, or an action taken automatically based on the information received.
How enterprise data is structured. The actual structure or design varies depending on the eventual end result required. Data architecture has three stages or processes: the conceptual representation of business entities, the logical representation of the relationships among those entities, and the physical construction of the system to support the functionality.
A physical facility that houses a large number of servers and data storage devices. Data centers might belong to a single organization or sell their services to many organizations.
The act of reviewing and revising data to remove duplicate entries, correct misspellings, add missing data, and provide more consistency.
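A minimal data-cleansing sketch using the pandas library: removing duplicates, correcting a misspelling, and filling in missing data. The column names and values are illustrative only:

```python
import pandas as pd

df = pd.DataFrame({
    "city":  ["Boston", "boston", "New Yrok", None],
    "sales": [120, 120, 95, 80],
})

df["city"] = df["city"].str.title()                        # consistent capitalization
df["city"] = df["city"].replace({"New Yrok": "New York"})  # correct a misspelling
df["city"] = df["city"].fillna("Unknown")                  # supply missing data
df = df.drop_duplicates()                                  # remove duplicate entries

print(df)
```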
Any process that captures any type of data.
A person responsible for the database structure and the technical environment, including the storage of data.
The notion of making data available directly to workers throughout an organization, as opposed to having that data delivered to them by another party, often IT, within the organization.
The data that a person creates as a byproduct of a common activity–for example, a cell call log or web search history.
A means for a person to receive a stream of data. Examples of data feed mechanisms include RSS and Twitter.
A set of processes or rules that ensure the integrity of the data and that data management best practices are met.
The process of combining data from different sources and presenting it in a single view.
The measure of trust an organization has in the accuracy, completeness, timeliness, and validity of the data.
A non-profit international organization for technical and business professionals “dedicated to advancing the concepts and practices of information and data management.”
A place where people can buy and sell data online.
The access layer of a data warehouse used to provide data to users.
The process of moving data between different storage types or formats, or between different computer systems.
The process of deriving patterns or knowledge from large data sets.
A data model defines the structure of data, either to communicate between functional and technical people the data needed for business processes, or to communicate to application development team members a plan for how data will be stored and accessed.
An individual item on a graph or a chart.
The process of collecting statistics and information about data in an existing source.
The measure of data to determine its worthiness for decision making, planning, or operations.
The process of sharing information to ensure consistency between redundant sources.
The location of permanently stored data.
A recent term that has multiple definitions, but generally accepted as a discipline that incorporates statistics, data visualization, computer programming, data mining, machine learning, and database engineering to solve complex problems.
A practitioner of data science.
The practice of protecting data from destruction or unauthorized access.
A collection of data, typically in tabular form.
Any provider of data–for example, a database or a data stream.
A person responsible for data stored in a data field.
A specific way of storing and organizing data.
The process of abstracting different data sources through a single data access layer.
A visual abstraction of data designed for the purpose of deriving meaning or communicating information more effectively.
A place to store data for the purpose of reporting and analysis.
Using data to support making crucial decisions.
A digital collection of data and the structure around which the data is organized. The data is typically entered into and accessed via a database management system (DBMS).
A person, often certified, who is responsible for supporting and maintaining the integrity of the structure and content of a database.
A database hosted in the cloud and sold on a metered basis. Examples include Heroku Postgres and Amazon Relational Database Service.
Software that collects and provides access to data in a structured format.
The act of removing all data that links a person to a particular piece of information.
IBM’s weather prediction service that provides weather data to organizations such as utilities, which use the data to optimize energy distribution.
Data relating to the characteristics of a human population.
A data cache that is spread across multiple systems but works as one. It is used to improve performance.
A file system that enables sharing by being mounted on multiple servers simultaneously.
A software module designed to work with other distributed objects stored on other computers.
The execution of a process across multiple computers connected by a computer network.
The practice of tracking and storing electronic documents and scanned images of paper documents.
An open source distributed system for performing interactive analysis on large-scale datasets. It is similar to Google’s Dremel, and is managed by Apache.
An open source search engine built on Apache Lucene.
A digitized health record meant to be usable across different health care settings.
A software system that allows an organization to coordinate and manage all its resources, information, and business functions.
Analysis that shows the series of steps that led to an action.
One million terabytes, or 1 billion gigabytes of information.
An approach to data analysis focused on identifying general patterns in data, including outliers and features of the data that are not anticipated by the experimenter’s current knowledge or preconceptions. EDA aims to uncover underlying structure, test assumptions, detect mistakes, and understand relationships between variables.
Data that exists outside of a system.
A process used in data warehousing to prepare data for use in reporting or analytics.
The automatic switching to another computer or node should one fail.
A US federal law that requires all federal agencies to meet certain standards of information security across their systems.
A computing architecture that moves cloud computing services, including analytics, communications, storage, etc., closer to users and/or data sources through a geographically distributed network of devices.
The use of gaming techniques in non-game applications, such as motivating employees and encouraging desired customer behaviors. Data analytics often is applied to gamification, for example, for audience segmentation, to inform techniques to better encourage desired behaviors, and to personalize rewards for optimal results.
A type of NoSQL database that uses graph structures for semantic queries with nodes, edges, and properties to store, map, and query relationships in data.
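A sketch of the node/edge/property model behind graph databases, using the networkx library in place of an actual graph database (the node names and properties are illustrative only):

```python
import networkx as nx

g = nx.DiGraph()
g.add_node("alice", kind="person", age=34)   # nodes carry properties
g.add_node("acme", kind="company")
g.add_edge("alice", "acme", relation="works_for", since=2019)  # edges carry properties too

# A simple relationship query: who works for acme?
employees = [u for u, v, d in g.edges(data=True)
             if v == "acme" and d["relation"] == "works_for"]
print(employees)
```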
The performing of computing functions using resources from multiple distributed systems. Grid computing typically involves large files and is most often used for multiple applications. The systems that comprise a grid computing network do not have to be similar in design or in the same geographic location.
An open source software library project administered by the Apache Software Foundation. Apache defines Hadoop as “a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model.”
A fault-tolerant distributed file system designed to run on low-cost commodity hardware, written in Java for the Hadoop framework.
A software/hardware in-memory computing platform from SAP designed for high-volume transactions and real-time analytics.
A distributed columnar NoSQL database.
HPC systems, also called supercomputers, are often custom built from state-of-the-art technology to maximize compute performance, storage capacity and throughput, and data transfer speeds.
A SQL-like query and data warehouse engine.
An open source, distributed SQL query engine for Hadoop.
The integration of data analytics into the data warehouse.
The storage of data in memory across multiple servers for the purpose of greater scalability and faster access or analytics.
Any database system that relies on memory for data storage.
The practice of collecting, managing, and distributing information of all types–digital, paper-based, structured, unstructured.
The network of physical objects or “things” embedded with electronics, software, sensors, and connectivity to enable them to achieve greater value and service by exchanging data with the manufacturer, operator, and/or other connected devices. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.
LinkedIn’s open-source message system used to monitor activity events on the web.
Any delay in a response or delivery of data from one point to another.
Any computer system, application, or technology that is obsolete, but continues to be used because it performs a needed function adequately.
As described by World Wide Web inventor Tim Berners-Lee, “Cherry-picking common attributes or languages to identify connections or relationships between disparate sources of data.”
The process of distributing workload across a computer network or computer cluster to optimize performance.
Location analytics brings mapping and map-driven analytics to enterprise business systems and data warehouses. It allows you to associate geospatial information with datasets.
Data that describes a geographic location.
A file that a computer, network, or application creates automatically to record events that occur during operation–for example, the time a file is accessed.
A term coined by mathematician and network scientist Samuel Arbesman that refers to “datasets that have massive historical sweep.”
The use of algorithms to allow a computer to analyze data for the purpose of “learning” what action to take when a specific pattern or event occurs.
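A toy machine-learning sketch using scikit-learn: the model “learns” from labeled examples and then classifies a new observation. The features, labels, and values are made up for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Feature vectors (e.g., [purchases_last_month, minutes_on_site]) and labels
# (1 = customer churned, 0 = customer stayed).
X = [[0, 5], [1, 12], [6, 40], [8, 55], [2, 10], [7, 48]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)     # learn from past examples
print(model.predict([[3, 20]]))            # predicted label for a new pattern
```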
Any data that is automatically created from a computer process, application, or other non-human source.
A general term that refers to the process of breaking up a problem into pieces that are then distributed across multiple computers on the same network or cluster, or across a grid of disparate and possibly geographically separated systems (map), and then collecting all the results and combining them into a report (reduce). Google’s branded framework to perform this function is called MapReduce.
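A single-process sketch of the map/reduce idea applied to word counting: each piece is processed independently (map), and the partial results are combined into one report (reduce). Real frameworks such as Hadoop MapReduce distribute these steps across a cluster; the documents below are made up:

```python
from collections import Counter
from functools import reduce

documents = [
    "big data needs big storage",
    "storage and compute scale together",
]

# Map: process each piece independently (each could run on its own node).
partial_counts = [Counter(doc.split()) for doc in documents]

# Reduce: combine the partial results into a single report.
total_counts = reduce(lambda a, b: a + b, partial_counts, Counter())
print(total_counts.most_common(3))
```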
The process of combining different datasets within a single application to enhance output–for example, combining demographic data with real estate listings.
The act of processing a program by breaking it up into separate pieces, each of which is executed on its own processor, operating system, and memory.
Master data is any non-transactional data that is critical to the operation of a business–for example, customer or supplier data, product information, or employee data. MDM is the process of managing that data to ensure consistency, quality, and availability.
Any data used to describe other data–for example, a data file’s size or date of creation.
An open-source NoSQL database managed by 10gen.
A database optimized to work in a massively parallel processing environment.
The act of breaking up an operation within a single computer system into multiple threads for faster execution.
A type of database that stores data as multidimensional arrays, or “cubes,” as opposed to the row-and-column storage structure of relational databases. This enables data to be analyzed from different angles for complex queries and online analytical processing (OLAP) applications.
The ability of a computer program or system to understand human language. Applications of natural language processing include enabling humans to interact with computers using speech, automated language translation, and deriving meaning from unstructured data such as text or speech data.
A class of database management system that does not use the relational model. NoSQL is designed to handle large data volumes that do not follow a fixed schema, and it is ideally suited for use with very large data volumes that do not require the relational model.
A database management system in which information is represented as objects, as used in object-oriented programming, rather than as data types such as integers or numbers. Also called Object Database Management Systems (ODBMS).
The process of analyzing multidimensional data using three operations: consolidation (the aggregation of available data), drill-down (the ability for users to see the underlying details), and slice and dice (the ability for users to select subsets and view them from different perspectives).
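A rough sketch of the three operations using pandas on an in-memory table (a real OLAP engine works against a multidimensional cube; the table and values below are illustrative only):

```python
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "product": ["A", "A", "B", "B"],
    "revenue": [100, 120, 90, 130],
})

# Consolidation (roll-up): aggregate revenue by region.
print(sales.groupby("region")["revenue"].sum())

# Drill-down: see the underlying detail by region and quarter.
print(sales.groupby(["region", "quarter"])["revenue"].sum())

# Slice and dice: select a subset and view it from a different perspective.
east_q1 = sales[(sales["region"] == "East") & (sales["quarter"] == "Q1")]
print(east_q1)
```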
The process of providing users with access to large amounts of transactional data in a way that they can derive meaning from it.
A consortium of global IT organizations whose goal is to speed the migration to cloud computing.
The open source version of Google’s BigQuery Java code. It is being integrated with Apache Drill.
A collaborative organization initiated by IBM in 2013 as part of its effort to open up its Power Architecture products to a collaborative development approach. The foundation’s goal, according to its mission statement, is “to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry.”
A location to gather and store data from multiple sources so that more operations can be performed on it before it is sent to the data warehouse for reporting.
Breaking up an analytical problem into smaller components and running algorithms on each of those components at the same time. Parallel data analysis can occur within the same system or across multiple systems.
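A minimal sketch of this idea within a single system, using Python’s multiprocessing module: the dataset is split into chunks, a statistic is computed on each chunk in a separate process, and the partial results are combined (the data and chunk size are illustrative only):

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    # The per-chunk "algorithm" -- here just a sum.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

    with Pool(processes=4) as pool:
        partials = pool.map(chunk_sum, chunks)   # run on the chunks in parallel

    print(sum(partials))                         # combine the partial results
```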
A mechanism that allows programming code to call multiple functions in parallel.
The ability to execute multiple tasks at the same time.
A query that is executed over multiple system threads for faster performance.
The classification or labeling of an identified pattern in the machine learning process.
The process of monitoring system or business performance against predefined goals to identify areas that need attention.
One million gigabytes or 1,024 terabytes.
A data flow language and execution framework for parallel computation.
Using statistical functions on one or more datasets to predict trends or future events.
The process of developing a model that will most likely predict a trend or outcome.
The process of analyzing a search query for the purpose of optimizing it for the best possible result.
An open source software environment used for statistical computing.
A technology that uses wireless communications to send information about an object from one point to another.
A descriptor for events, data streams, or processes that have an action performed on them as they occur.
An algorithm that analyzes a customer’s purchases and actions on an e-commerce site and then uses that data to recommend complementary products.
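A toy item-to-item recommendation sketch based on purchase co-occurrence; real recommendation engines use far richer signals and models, and the baskets below are made up:

```python
from collections import Counter
from itertools import combinations

purchases = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card"},
    {"laptop", "mouse"},
]

# Count how often each pair of products appears in the same basket.
co_occurrence = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item, top_n=2):
    scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("camera"))   # complementary products for a camera buyer
```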
The process of managing an organization’s records throughout their entire lifecycle, from creation to disposal.
Data that describes an object and its properties. The object may be physical or virtual.
The presentation of information derived from a query against a dataset, usually in a predetermined format.
The application of statistical methods on one or more datasets to determine the likely risk of a project, action, or decision.
The process of determining the main cause of an event or problem.
Google’s procedural domain-specific programming language designed to process large volumes of log records.
The ability of a system or process to maintain acceptable performance levels as workload or scope increases.
The structure that defines the organization of data in a database system.
The process of locating specific data or content using a search tool.
Aggregated data about search terms used over time.
A project of the World Wide Web Consortium (W3C) to encourage the use of a standard format to include semantic content on websites. The goal is to enable computers and other devices to better process data.
Data that is not structured by a formal data model, but provides other means of describing the data and hierarchies.
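A sketch of semi-structured data as JSON: there is no fixed schema, but the field names and nesting still describe the data and its hierarchy (the record below is made up):

```python
import json

record = """
{
  "user": "ada",
  "events": [
    {"type": "click", "page": "/home"},
    {"type": "search", "query": "columnar databases", "results": 42}
  ]
}
"""

doc = json.loads(record)
print(doc["user"], len(doc["events"]))
```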
The application of statistical functions on comments people make on the web and through social networks to determine how they feel about a product or company.
A physical or virtual computer that serves requests for a software application and delivers the results over a network.
The smart grid refers to the concept of adding intelligence to the world’s electrical transmission systems with the goal of optimizing energy efficiency. Enabling the smart grid will rely heavily on collecting, analyzing, and acting on large volumes of data.
An electrical meter that monitors and reports energy usage and is capable of two-way communication with the utility.
Application software that is used over the web by a thin client or web browser. Salesforce is a well-known example of SaaS.
Also called a solid-state disk, a device that uses memory ICs to persistently store data.
Any means of storing data persistently.
An open-source distributed computation system designed for processing multiple data streams in real time.
Data that is organized by a predetermined structure.
A programming language designed specifically to manage and retrieve data from a relational database system.
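A small sketch of defining, loading, and querying a relational table with SQL, using Python’s built-in sqlite3 module and an in-memory database (the table and values are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Ada", 120.0), ("Grace", 95.5), ("Ada", 42.0)])

# SQL retrieves and aggregates the data declaratively.
for row in conn.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
    print(row)
conn.close()
```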
1,000 gigabytes.
The application of statistical, linguistic, and machine learning techniques on text-based sources to derive meaning or insight.
Data that changes unpredictably. Examples include accounts payable and receivable data, or data about product shipments.
As more data becomes openly available, the idea of proprietary data as a competitive advantage is diminished.
Data that has no identifiable structure – for example, the text of email messages.
The practice of changing price on the fly in response to supply and demand. It requires real-time monitoring of consumption and supply.
Real-time weather data is now widely available for organizations to use in a variety of ways. For example, a logistics company can monitor local weather conditions to optimize the transport of goods. A utility company can adjust energy distribution in real time.
An integrated data management system that allows geophysicists, engineers, and financial managers in the oil and gas industry to evaluate the potential of oil and gas fields.