From bca1c5046d3baa98a686270de58c3b4a06b9a536 Mon Sep 17 00:00:00 2001
From: Deepak Raj
diff --git a/C++/Algorithms/Sieve Algorithms/README.md b/C++/Algorithms/Sieve Algorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/C++/Algorithms/SortingAlgorithms/README.md b/C++/Algorithms/SortingAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/C++/Data Structure/Stacks/README.md b/C++/Data Structure/Stacks/README.md
deleted file mode 100644
index d631dca1..00000000
--- a/C++/Data Structure/Stacks/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Stack
-
-> A stack is a linear data structure in which operations are performed in a particular order: LIFO (Last In, First Out). Insertion and deletion happen at the same end. For an array-based implementation of a stack, the push and pop operations take constant time.
-
-- [Stack](#stack)
-  - [Functions of a stack](#functions-of-a-stack)
-
-## Functions of a stack
-
-> A stack works on the LIFO method: a new element is added at one end, and elements are removed from that same end only.
-
-- empty() - Returns whether the stack is empty, O(1)
-- size() - Returns the size of the stack, O(1)
-- top() - Returns a reference to the topmost element of the stack, O(1)
-- push() - Adds an element at the top of the stack, O(1)
-- pop() - Deletes the topmost element of the stack, O(1)
diff --git a/Python/Algorithms/BackTrackingAlgorithms/README.md b/Python/Algorithms/BackTrackingAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/DeepLearningAlgorithms/README.md b/Python/Algorithms/DeepLearningAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/DivideAndConquer/README.md b/Python/Algorithms/DivideAndConquer/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/DynamicPrograming/README.md b/Python/Algorithms/DynamicPrograming/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/EularianPathAlgorithms/README.md b/Python/Algorithms/EularianPathAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/MachineLearningAlgorithms/README.md b/Python/Algorithms/MachineLearningAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/Maths/README.md b/Python/Algorithms/Maths/README.md
deleted file mode 100644
index fa54225e..00000000
--- a/Python/Algorithms/Maths/README.md
+++ /dev/null
@@ -1 +0,0 @@
-Decimal to N base value conversion program added.
diff --git a/Python/Algorithms/PathFindingAlgorithms/README.md b/Python/Algorithms/PathFindingAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/Permutations_of_string/README.md b/Python/Algorithms/Permutations_of_string/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/README.md b/Python/Algorithms/README.md
index e3681ef0..dc862031 100644
--- a/Python/Algorithms/README.md
+++ b/Python/Algorithms/README.md
@@ -1,4 +1,4 @@
-# Algorithm Implementation in Python 
+# Algorithm Implementation in Python
 
 List of Algorithms in Python contained in this repository
 
@@ -14,8 +14,8 @@ List of Algorithms in Python contained in this repository
 - [Sorting Algorithms](#sortingalgorithms)
 - [Finding all permutation](#permutationalgorithms)
 - [Spiral Matrix](#SpiralMatrix)
- 
+
-### Backtracking Algorithms: 
+### Backtracking Algorithms:
 Backtracking is a technique for solving problems recursively by trying to build a solution incrementally, one piece at a time, removing those solutions that fail to satisfy the constraints of the problem at any point in time.
 Example of implementation- Sudoku solving.
 
@@ -24,14 +24,14 @@ Deep learning is part of machine learning, whose methods are based on artificial
 Examples include- CNN, RNN, LSTM, GAN, RBM, etc.
 
 ### Divide And Conquer:
-Divide and Conquer is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. 
-Example of implementation- Quick Sort, Merge Sort. 
+Divide and Conquer is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly.
+Example of implementation- Quick Sort, Merge Sort.
 
 ### Dynamic Programing:
 Dynamic Programming is primarily an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is to store the results of subproblems so that we do not have to re-compute them when needed later.
 Examples of implementation- Knapsack, Longest Common Subsequence.
 
-### Greedy Algorithms: 
+### Greedy Algorithms:
 A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. The algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.
 Examples of implementation- Kruskal's algorithm, Prim's algorithm.
 
@@ -40,20 +40,20 @@ A machine learning algorithm is a method that provides systems the ability to au
 Examples include- Linear Regression, Logistic Regression, Naïve Bayes, KNN, etc.
 
 ### Path Finding Algorithms:
-Pathfinding or pathing is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. 
+Pathfinding or pathing is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant of solving mazes.
 Example of implementation- A* search, Dijkstra's algorithm.
 
 ### Recursion Algorithms:
 Recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem.
 Such problems can generally be solved by iteration, but this needs to identify and index the smaller instances at programming time.
 Examples of implementation- Tower of Hanoi, Tree traversals, DFS.
 
-### Searching Algorithms: 
-The searching algorithms are used to search or find one or more than one element from a dataset. These type of algorithms are used to find elements from a specific data structures, which maybe sequential or not. 
+### Searching Algorithms:
+Searching algorithms are used to find one or more elements in a dataset. These types of algorithms locate elements in specific data structures, which may or may not be sequential.
 Examples of implementation- Binary Search, Linear Search, Fibonacci Search.
 
 ### Sorting Algorithms:
 A Sorting algorithm is an algorithm that puts elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the efficiency of other algorithms that require input data to be in sorted lists.
-Examples of implementation- Quick Sort, Merge Sort. 
+Examples of implementation- Quick Sort, Merge Sort.
 
 ### Finding all permutation:
 In mathematics, a permutation of a set is, loosely speaking, an arrangement of its members into a sequence or linear order, or if the set is already ordered, a rearrangement of its elements. The word "permutation" also refers to the act or process of changing the linear order of an ordered set.
@@ -61,4 +61,3 @@ Examples of implementation-Input : str = 'ABC' Output : ABC,ACB,BAC,BCA,CAB,CB
 
 ### Spiral Matrix:
 The Spiral Matrix problem takes a 2-dimensional array of N rows and M columns as input, and prints the elements of this matrix in spiral order. The spiral begins at the top left corner of the input matrix, and prints the elements it encounters while looping towards the center of the matrix, in a clockwise manner.
-
diff --git a/Python/Algorithms/RecursionAlgorithms/README.md b/Python/Algorithms/RecursionAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/SearchingAlgorithms/README.md b/Python/Algorithms/SearchingAlgorithms/README.md
deleted file mode 100644
index 19f51e76..00000000
--- a/Python/Algorithms/SearchingAlgorithms/README.md
+++ /dev/null
@@ -1,81 +0,0 @@
diff --git a/Python/Algorithms/Sieve Algorithms/README.md b/Python/Algorithms/Sieve Algorithms/README.md
deleted file mode 100644
index 6c45bb60..00000000
--- a/Python/Algorithms/Sieve Algorithms/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Algorithms
-
-- [Algorithms](#algorithms)
-  - [Sieve of Eratosthenes](#sieve-of-eratosthenes)
-  - [Sieve of Atkin](#sieve-of-atkin)
-
-## Sieve of Eratosthenes
-
-Sieve of Eratosthenes is a simple and ancient algorithm used to find the prime numbers up to any given limit. It is one of the most efficient ways to find small prime numbers.
-
-In the following algorithm, the number 0 represents a composite number.
-
-- To find all primes below n, generate a list of all integers from 2 to n. (Note: 1 is not prime.)
-- Start with the smallest prime number, i.e. p = 2.
-- Mark all the multiples of p that are less than n as composite. To do this, set the value of those numbers (the multiples of p) in the generated list to 0. Do not mark p itself as composite.
-- Assign to p the next prime, i.e. the next non-zero number in the list that is greater than p.
-- Repeat the process while p <= n**(1/2).
-
-## Sieve of Atkin
-
-> The `Sieve of Atkin` is a modern algorithm for finding all prime numbers up to a specified integer.
diff --git a/Python/Algorithms/SortingAlgorithms/README.md b/Python/Algorithms/SortingAlgorithms/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/Algorithms/SpiralMatrix/README.md b/Python/Algorithms/SpiralMatrix/README.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/Python/DataStructure/LinkedListDS/__pycache__/linkedlist.cpython-37.pyc b/Python/DataStructure/LinkedListDS/__pycache__/linkedlist.cpython-37.pyc
deleted file mode 100644
index a305aae75858ad73fd9a2328dbc2d658aadf0fdb..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
+Searching for data stored in different data structures is a crucial part of pretty much every single application.
+There are many different algorithms available to utilize when searching, and each have different implementations and rely on different data structures to get the job done.
+Being able to choose a specific algorithm for a given task is a key skill for developers and can mean the difference between a fast, reliable and stable application and an application that crumbles from a simple request.
+Linear search is one of the simplest searching algorithms, and the easiest to understand. We can think of it as a ramped-up version of our own implementation of Python's in operator.
+The algorithm consists of iterating over an array and returning the index of the first occurrence of an item once it is found.
+The time complexity of linear search is O(n), meaning that the time taken to execute increases with the number of items in our input list.
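
As an illustrative aside (not lines from this patch), here is a minimal sketch of linear search in Python; the function name `linear_search` and the sample list are assumptions for demonstration:

```python
def linear_search(items, target):
    """Return the index of the first occurrence of target, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

# Example: the value 16 sits at index 3.
print(linear_search([4, 8, 15, 16, 23, 42], 16))  # -> 3
```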
+Binary search follows a divide and conquer methodology. It is faster than linear search but requires that the array be sorted before the algorithm is executed.
+Assuming that we're searching for a value val in a sorted array, the algorithm compares val to the value of the middle element of the array, which we'll call mid.
+We can only pick one possibility per iteration, and our pool of possible matches gets divided by two in each iteration. This makes the time complexity of binary search O(log n).
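
A minimal iterative sketch of binary search under the same caveat; it assumes the input list is already sorted, and the helper name `binary_search` is illustrative:

```python
def binary_search(sorted_items, val):
    """Return the index of val in sorted_items, or -1 if it is not present."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current search range
        if sorted_items[mid] == val:
            return mid
        elif sorted_items[mid] < val:
            low = mid + 1            # val can only be in the right half
        else:
            high = mid - 1           # val can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 9))  # -> 4
```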
+Jump Search is similar to binary search in that it works on a sorted array, and uses a similar divide and conquer approach to search through it.
+It can be classified as an improvement of the linear search algorithm since it depends on linear search to perform the actual comparison when searching for a value.
+Given a sorted array, instead of searching through the array elements incrementally, we search in jumps.
+The time complexity of jump search is O(√n), where √n is the jump size, and n is the length of the list, placing jump search between the linear search and binary search algorithms in terms of efficiency.
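
A sketch of jump search with the same assumptions (sorted input, illustrative function name); the jump size is taken as the integer square root of the list length:

```python
import math

def jump_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is not present."""
    n = len(sorted_items)
    if n == 0:
        return -1
    step = max(1, int(math.sqrt(n)))  # block (jump) size, roughly sqrt(n)
    prev = 0
    # Jump block by block until we reach the block that could contain target.
    while prev < n and sorted_items[min(prev + step, n) - 1] < target:
        prev += step
    # Linear scan inside that block.
    for i in range(prev, min(prev + step, n)):
        if sorted_items[i] == target:
            return i
    return -1

print(jump_search([1, 4, 9, 16, 25, 36, 49, 64], 25))  # -> 4
```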
+Fibonacci search is another divide and conquer algorithm which bears similarities to both binary search and jump search. It gets its name because it uses Fibonacci numbers to calculate the block size or search range in each step.
+Fibonacci numbers start with zero and follow the pattern 0, 1, 1, 2, 3, 5, 8, 13, 21..., where each element is the sum of the two numbers that immediately precede it.
+The algorithm works with three Fibonacci numbers at a time.
+The time complexity for Fibonacci search is O(log n); the same as binary search. This means the algorithm is faster than both linear search and jump search in most cases.
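
A sketch of Fibonacci search, again assuming a sorted list; `fib2`, `fib1` and `fib` are the three consecutive Fibonacci numbers the text mentions, and the names are illustrative:

```python
def fibonacci_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is not present."""
    n = len(sorted_items)
    fib2, fib1 = 0, 1              # F(k-2), F(k-1)
    fib = fib2 + fib1              # F(k)
    while fib < n:                 # smallest Fibonacci number >= n
        fib2, fib1 = fib1, fib
        fib = fib2 + fib1
    offset = -1                    # everything up to offset is already eliminated
    while fib > 1:
        i = min(offset + fib2, n - 1)          # probe position
        if sorted_items[i] < target:           # discard everything before i
            fib, fib1, fib2 = fib1, fib2, fib1 - fib2
            offset = i
        elif sorted_items[i] > target:         # discard everything after i
            fib, fib1, fib2 = fib2, fib1 - fib2, fib2 - (fib1 - fib2)
        else:
            return i
    if fib1 and offset + 1 < n and sorted_items[offset + 1] == target:
        return offset + 1                      # last remaining candidate
    return -1

print(fibonacci_search([10, 22, 35, 40, 45, 50, 80, 82, 85, 90, 100], 85))  # -> 8
```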
+Exponential search is another search algorithm that can be implemented quite simply in Python, compared to jump search and Fibonacci search which are both a bit complex. It is also known by the names galloping search, doubling search and Struzik search.
+Exponential search depends on binary search to perform the final comparison of values. The algorithm works by first finding a range that may contain the target, by repeatedly doubling an index until the value at that index is no longer smaller than the target, and then running a binary search within that range.
+Exponential search runs in O(log i) time, where i is the index of the item we are searching for. In its worst case, the time complexity is O(log n), when the last item is the item we are searching for (n being the length of the array).
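
A sketch of exponential search, reusing binary search for the final step; the doubling bound follows the description above, and the function name is an assumption:

```python
def exponential_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is not present."""
    n = len(sorted_items)
    if n == 0:
        return -1
    if sorted_items[0] == target:
        return 0
    # Double the bound until it overshoots the target (or the end of the list).
    bound = 1
    while bound < n and sorted_items[bound] < target:
        bound *= 2
    # Binary search within the block [bound // 2, min(bound, n - 1)].
    low, high = bound // 2, min(bound, n - 1)
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(exponential_search([2, 3, 5, 7, 11, 13, 17, 19, 23], 17))  # -> 6
```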
+Interpolation search is another divide and conquer algorithm, similar to binary search. Unlike binary search, it does not always begin searching at the middle.
+The time complexity of interpolation search is O(log log n) when values are uniformly distributed. If values are not uniformly distributed, the worst-case time complexity is O(n), the same as linear search.
+Interpolation search works best on uniformly distributed, sorted arrays. Whereas binary search starts in the middle and always divides into two, interpolation search calculates the likely position of the element and checks the index, making it more likely to find the element in a smaller number of iterations.
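
A sketch of interpolation search for sorted numeric data; the position estimate below assumes roughly uniformly distributed integer values, and the helper name is illustrative:

```python
def interpolation_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is not present."""
    low, high = 0, len(sorted_items) - 1
    while low <= high and sorted_items[low] <= target <= sorted_items[high]:
        if sorted_items[low] == sorted_items[high]:
            return low if sorted_items[low] == target else -1
        # Estimate the probable position from the value distribution.
        pos = low + (target - sorted_items[low]) * (high - low) // (
            sorted_items[high] - sorted_items[low])
        if sorted_items[pos] == target:
            return pos
        elif sorted_items[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1

print(interpolation_search([10, 20, 30, 40, 50, 60, 70], 60))  # -> 5
```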
diff --git a/docs/algorithms/sortingalgorithms/index.html b/docs/algorithms/sortingalgorithms/index.html
new file mode 100644
index 00000000..9e01e162
--- /dev/null
+++ b/docs/algorithms/sortingalgorithms/index.html
@@ -0,0 +1,5 @@
+layout : default
+title : Sorting Algorithm | PyContributors
+description : Algorithms Description
+image :
+---
diff --git a/docs/datastructure/array/index.html b/docs/datastructure/array/index.html
new file mode 100644
index 00000000..1559153e
--- /dev/null
+++ b/docs/datastructure/array/index.html
@@ -0,0 +1,6 @@
+layout : default
+title : Array | PyContributors
+description : array description
+image :
+---
+array
diff --git a/docs/datastructure/index.html b/docs/datastructure/index.html
new file mode 100644
index 00000000..789bc828
--- /dev/null
+++ b/docs/datastructure/index.html
@@ -0,0 +1,7 @@
+layout : default
+title : Data Structure | PyContributors
+description : Data Structure Description
+image :
+---
+
+data structure page
diff --git a/docs/datastructure/linkedlist/index.html b/docs/datastructure/linkedlist/index.html
new file mode 100644
index 00000000..331e7f0e
--- /dev/null
+++ b/docs/datastructure/linkedlist/index.html
@@ -0,0 +1,5 @@
+layout : default
+title : LinkedList | PyContributors
+description : Linked List Description
+image :
+---
diff --git a/docs/index.html b/docs/index.html
index 8ae4e5e7..069ca099 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1,472 +1,12 @@
+layout : default
+title : Algo&Ds | PyContributors
+description : Data Structure and Algorithms
+image : https://images.pexels.com/photos/265152/pexels-photo-265152.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=650&w=940
+---
-| Data Structure | C++ | Python | Status/Remarks |
-| --- | --- | --- | --- |
-| Linked List | Yes | Yes | Being improved #23 |
-| Sets | Yes | Yes | Implemented |
-| Stack | Yes | In progress #13 | |
-| Queue | In progress #7 | In progress #12 | |

-| Algorithm | C++ | Python | Remarks |
-| --- | --- | --- | --- |
-| Searching | | | |
-| Binary Search | No | In progress #9 | |
-| Jump Search | In progress #39 | In progress #10 | |
-| Fibonacci Search | No | In progress #11 | |
-| Sorting | | | |
-| Selection Sort | In progress #29 | In progress #30 | |
-| Bubble Sort | Yes | Yes | |
-| Insertion Sort | In progress #2 | Yes | |
-| Merge Sort | In progress #3 | Yes | |
-| Quick Sort | In progress #4 | Yes | |
-| Heap Sort | In progress #5 | In progress #6 | |
-| Radix Sort | In progress #63 | Yes | |
-| Recursion | | | |
-| Fibonacci Numbers | No | Yes | |
-| Fibonacci List | No | Yes | |
-| Factors | No | Yes | |
-| Recursion | No | Yes | |
-| Recursive Sum | No | Yes | |
-| Sieve | | | |
-| Sieve of Eratosthenes | No | Yes | |
-| Dynamic Programming | | | |
-| Knapsack Problem | No | Yes | |
-| Longest Common Subsequence | No | Yes | |
-| Longest Increasing Subsequence | No | Yes | |
-| Merge Sort | No | Yes | Duplicate |
-| Fibonacci Number | No | Yes | Duplicate |
-| Naive Pattern Search | In progress #18 | In progress #17 | |
-| Rabin-Karp Algorithm | No | | |
-| Backtracking | | | |
-| Sudoku Solver | In progress #21 | No | |
-| The Knight's Tour | In progress #33 | In progress #32 | |
-| Subset Sum | In progress #36 | In progress #35 | |
-| Deep Learning | | | |
-| Activation Function | No | Yes | |
-| Feed Forward Normal Function | No | Yes | |
-| Layers | No | Yes | |
-| Loss Function | No | Yes | |
-| Optimizers | No | Yes | |
-| Machine Learning | | | |
-| Gradient Descent | No | Yes | |
-| Linear Regression | No | Yes | |
-| Logistic Regression | No | Yes | |
-| Decision Tree | No | In progress #37 | |
-| K-Nearest Neighbours | No | In progress #38 | |