What are the limitations of the Apriori approach? How can these limitations be improved? Explain

LIMITATIONS AND IMPROVEMENTS OF APRIORI

Limitations: The Apriori algorithm is easy to understand, and its join and prune steps are easy to implement on large itemsets in large databases (a short sketch of these steps follows the list below). Alongside these advantages, however, it has a number of limitations:

1. Huge number of candidates: Candidate generation is the inherent cost of the Apriori algorithm, no matter what implementation technique is applied, and handling a huge number of candidate sets is costly. For example, if there are 10^4 frequent 1-itemsets, the algorithm will need to generate more than 10^7 candidate 2-itemsets. Moreover, to discover a frequent pattern of length 100, it must generate more than 2^100, which is approximately 10^30 candidates in total.

2. Multiple scans of the transaction database: Apriori needs one full database scan per pass (n + 1 scans when the longest pattern has length n), so it is not a good choice for mining long patterns from large data sets.

3. When the database is scanned to check the candidate set C_k against each transaction in order to create F_k, a large number of transactions are scanned even if they do not contain any candidate k-itemset.
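
To make the join and prune steps and the candidate-generation cost above concrete, here is a minimal Python sketch of Apriori's candidate generation; it assumes itemsets are represented as frozensets, and the function name apriori_gen is just an illustrative label:

```python
from itertools import combinations

def apriori_gen(frequent_k):
    """Generate candidate (k+1)-itemsets from the frequent k-itemsets."""
    candidates = set()
    # Join step: union two frequent k-itemsets that share k-1 items.
    for a in frequent_k:
        for b in frequent_k:
            union = a | b
            if len(union) == len(a) + 1:
                candidates.add(union)
    # Prune step (Apriori property): drop any candidate with an infrequent
    # k-subset, since every subset of a frequent itemset must be frequent.
    return {c for c in candidates
            if all(frozenset(s) in frequent_k
                   for s in combinations(c, len(c) - 1))}

# Example: three frequent 2-itemsets join into one candidate 3-itemset.
F2 = {frozenset("ab"), frozenset("ac"), frozenset("bc")}
print(apriori_gen(F2))  # {frozenset({'a', 'b', 'c'})}
```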

Methods to Improve Apriori Efficiency: 

To improve Apriori's efficiency, we need to reduce the number of passes over the transaction database, shrink the number of candidates, and make the support counting of candidates easier. Several methods are available for improving the efficiency of the algorithm.

1. Hash-Based Technique: This method uses a hash table to cut down the candidate set. While one pass is counting k-itemsets, every (k+1)-itemset of each transaction is hashed into a bucket of the table via a hash function and the bucket count is incremented; a candidate whose bucket count is below the minimum support count cannot be frequent and is pruned.
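
The following sketch shows the idea for candidate 2-itemsets; the bucket count of 7 and the hash function are arbitrary illustrative choices, not from any particular implementation. Bucket counts are gathered during the 1-itemset pass, and any pair whose bucket total falls below the minimum support count is pruned before the 2-itemset pass:

```python
from itertools import combinations

N_BUCKETS = 7  # deliberately tiny; real implementations use many more

def bucket(pair):
    a, b = sorted(pair)
    return (hash(a) * 31 + hash(b)) % N_BUCKETS

def hash_bucket_counts(transactions):
    """While scanning for 1-itemsets, also hash every 2-itemset of each
    transaction into a bucket and count it."""
    counts = [0] * N_BUCKETS
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[bucket(pair)] += 1
    return counts

def can_be_frequent(pair, counts, min_count):
    # A bucket count is an upper bound on the support of every pair hashed
    # into it, so a pair in a light bucket is safely pruned.
    return counts[bucket(pair)] >= min_count

T = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "d"}]
print(can_be_frequent(("a", "b"), hash_bucket_counts(T), 2))  # True
```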

2. Transaction Reduction: This method reduces the number of transactions scanned in later iterations. A transaction that does not contain any frequent k-itemset is marked or removed, because such a transaction cannot contain any frequent (k+1)-itemset either.
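
A minimal sketch of this filter, assuming transactions are sets and frequent_k is a set of frozensets (the names are illustrative):

```python
def reduce_transactions(transactions, frequent_k):
    """Keep only transactions containing at least one frequent k-itemset;
    the rest cannot contain a frequent (k+1)-itemset and are dropped."""
    return [t for t in transactions if any(s <= t for s in frequent_k)]

# Example: the last transaction contains no frequent 2-itemset, so it is
# removed before the pass that counts candidate 3-itemsets.
F2 = {frozenset("ab")}
T = [{"a", "b", "c"}, {"a", "b"}, {"c", "d"}]
print(reduce_transactions(T, F2))  # keeps only the first two transactions
```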

3. Partitioning: This method requires only two database scans to mine the frequent itemsets. It relies on the fact that any itemset that is potentially frequent in the database must be frequent in at least one of the partitions of the database. In scan 1, partition the database and find the local frequent patterns of each partition; in scan 2, count those local patterns over the whole database to consolidate the global frequent patterns.
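
A two-scan sketch of this method under the same representation; the brute-force find_frequent helper is an illustrative stand-in for any in-memory miner and is only meant for small partitions:

```python
from itertools import combinations

def find_frequent(transactions, min_sup):
    """Brute-force local miner; fine for a small in-memory partition.
    min_sup is a relative threshold (a fraction of the transactions)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent = set()
    for k in range(1, len(items) + 1):
        level = {frozenset(c) for c in combinations(items, k)
                 if sum(frozenset(c) <= t for t in transactions) >= min_sup * n}
        if not level:    # no frequent k-itemsets => none of size k+1 either
            break
        frequent |= level
    return frequent

def partition_mine(transactions, min_sup, n_parts=4):
    # Scan 1: mine each partition locally with the same relative min_sup;
    # a globally frequent itemset must be locally frequent somewhere.
    candidates = set()
    for i in range(n_parts):
        candidates |= find_frequent(transactions[i::n_parts], min_sup)
    # Scan 2: count the union of local candidates over the full database.
    n = len(transactions)
    return {c for c in candidates
            if sum(c <= t for t in transactions) >= min_sup * n}
```

Note that the converse does not hold: a locally frequent itemset need not be globally frequent, which is exactly why the second scan is needed.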

4. Sampling: This method picks a random sample S from database D and then searches for frequent itemsets in S using Apriori. The database is scanned once to verify the frequent itemsets found in S; only the border of the closure of the frequent patterns needs to be checked (for example, check abcd instead of ab, ac, and so on). The database is scanned again to find any missed frequent patterns. It is still possible to lose a globally frequent itemset, but this risk can be reduced by lowering min_sup when mining the sample.
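
A sketch of the sample-and-verify idea, reusing the illustrative find_frequent helper from the partitioning sketch above; sample_frac and the 0.8 slack factor are arbitrary example values:

```python
import random

def sample_mine(transactions, min_sup, sample_frac=0.1, slack=0.8):
    # Mine a random sample with a lowered threshold so that borderline
    # globally frequent itemsets are less likely to be missed.
    k = max(1, int(len(transactions) * sample_frac))
    sample = random.sample(transactions, k)
    candidates = find_frequent(sample, min_sup * slack)
    # One full scan verifies which sample-frequent itemsets hold globally.
    n = len(transactions)
    return {c for c in candidates
            if sum(c <= t for t in transactions) >= min_sup * n}
```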

5. Dynamic Itemset Counting: This technique can add new candidate itemsets at marked start points (block boundaries) while the database is being scanned, finding longer frequent patterns from shorter frequent patterns and local database partitions without waiting for a full pass to complete.
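
A much-simplified sketch of the idea, assuming blocks of M transactions and an absolute minimum support count; real DIC tracks candidate states more carefully, and this version is only meant to show candidates starting to count mid-scan:

```python
from itertools import combinations

def dic(transactions, min_count, M=2):
    """Simplified Dynamic Itemset Counting: new candidates start counting
    at block boundaries and retire after seeing one full (wrapped) pass."""
    n = len(transactions)
    # itemset -> [support so far, transactions seen since counting started]
    counting = {frozenset([i]): [0, 0] for t in transactions for i in t}
    support = {}  # itemsets that have completed a full pass
    pos = 0
    while counting:
        for _ in range(M):                    # process one block
            t = transactions[pos % n]
            pos += 1
            for s, rec in list(counting.items()):
                rec[0] += s <= t              # count this transaction
                rec[1] += 1
                if rec[1] == n:               # full pass done: retire it
                    support[s] = counting.pop(s)[0]
        # Block boundary: an itemset that already looks frequent spawns
        # (k+1)-supersets whose k-subsets all look frequent as well.
        looks_frequent = {s for s, c in support.items() if c >= min_count}
        looks_frequent |= {s for s, r in counting.items() if r[0] >= min_count}
        for a in looks_frequent:
            for b in looks_frequent:
                u = a | b
                if (len(u) == len(a) + 1
                        and u not in counting and u not in support
                        and all(frozenset(x) in looks_frequent
                                for x in combinations(u, len(u) - 1))):
                    counting[u] = [0, 0]
    return {s for s, c in support.items() if c >= min_count}

T = [frozenset("abc"), frozenset("ab"), frozenset("ac"), frozenset("bd")]
print(dic(T, min_count=2))  # {a}, {b}, {c}, {a,b}, {a,c}
```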
