Week 2: Blind and heuristic searches
Techniques used in search algorithms to find solutions efficiently.
Heuristic
A technique used to cut down the running time of a search algorithm by evaluating a state to determine if it is close to the end goal.
Heuristic search
A search algorithm that uses heuristics to guide the search and constrain the search space. Also known as informed search.
Blind Search
A search algorithm that does not use any information about the problem, such as the goal state or the cost of actions.
Depth-First Search (DFS)
A blind search algorithm that explores the deepest branch first and uses a Last-In-First-Out (LIFO) approach.
Breadth-First Search (BFS)
A blind search algorithm that explores all neighboring nodes first before moving to the next level and uses a First-In-First-Out (FIFO) approach.
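The two blind searches above differ only in the frontier container: a stack (LIFO) for DFS, a queue (FIFO) for BFS. A minimal sketch; the toy graph and names are illustrative, not from the course:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: FIFO frontier, explores level by level."""
    frontier = deque([[start]])       # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()     # FIFO: oldest path first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

def dfs(graph, start, goal):
    """Depth-first search: LIFO frontier, dives down one branch first."""
    frontier = [[start]]              # stack of paths
    visited = {start}
    while frontier:
        path = frontier.pop()         # LIFO: newest path first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

On this diamond-shaped graph the two algorithms find the goal through different branches, which makes the LIFO/FIFO difference visible.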
Lowest Cost First Search (LCFS)
A blind search algorithm for graphs whose edges are not of uniform cost: the frontier path with the lowest total cost is expanded first.
Dijkstra Algorithm
A blind search algorithm that finds the shortest path between two nodes in a graph with non-negative edge weights.
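A minimal sketch of Dijkstra's algorithm using a priority queue ordered by path cost; the toy graph is illustrative:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path with non-negative edge weights.

    graph maps each node to a list of (neighbor, weight) pairs.
    """
    pq = [(0, start, [start])]        # (cost so far, node, path)
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)   # cheapest frontier entry first
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                  # already reached this node more cheaply
        best[node] = cost
        for nbr, w in graph.get(node, []):
            heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float('inf'), None

g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
```

The direct edge A→C costs 4, but the algorithm correctly prefers A→B→C at cost 3.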
Best-First Search
A heuristic search algorithm that uses an evaluation function to determine the best next node to explore.
A* Search
A heuristic search algorithm that combines lowest-cost-first and best-first search by considering both the cost of the path so far and the estimated cost to the goal.
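A* orders its frontier by f(n) = g(n) + h(n): cost so far plus heuristic estimate to the goal. A sketch on a small grid with a Manhattan-distance heuristic; the grid size and helper names are illustrative:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: frontier ordered by f(n) = g(n) + h(n)."""
    pq = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = {}
    while pq:
        f, g, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen and seen[node] <= g:
            continue
        seen[node] = g
        for nbr, w in neighbors(node):
            heapq.heappush(pq, (g + w + h(nbr), g + w, nbr, path + [nbr]))
    return None

def grid_neighbors(p):
    """4-connected moves on a 5x5 grid, each of cost 1."""
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

goal = (4, 4)
# Manhattan distance never overestimates on a 4-connected grid (admissible),
# so A* is guaranteed to return an optimal path.
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
```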
Constraint Satisfaction Problems (CSPs)
Problems that involve finding solutions that satisfy a set of constraints.
Domain
The set of possible values that a variable can take in a CSP.
Hard Constraints
Constraints that must be satisfied in a CSP and specify legal combinations of values for variables.
Soft Constraints
Constraints in a CSP that have a cost associated with each assignment of a value to a variable.
Optimization
The process of finding the best solution among a set of feasible solutions in a CSP.
Constrained Optimization Problem
A problem that involves satisfying a set of hard constraints while trying to optimize a cost function.
Knapsack Problem
A problem with a knapsack of capacity M and N items, each with a weight and a value. The goal is to select a subset of items that maximizes total value without exceeding the knapsack capacity.
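The 0/1 knapsack problem has a classic dynamic-programming solution (not named on the card, but a standard way to solve it exactly); a minimal sketch with illustrative data:

```python
def knapsack(capacity, items):
    """0/1 knapsack DP. items is a list of (weight, value) pairs."""
    best = [0] * (capacity + 1)       # best[c] = max value achievable within capacity c
    for w, v in items:
        # iterate capacities backwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

For example, with capacity 10 and items (5, 10), (4, 40), (6, 30), (3, 50), the optimum picks the weight-4 and weight-3 items for a total value of 90.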
Traveling Salesman Problem (TSP)
A problem where the goal is to find the shortest possible route that visits all cities and returns to the starting city.
Sudoku
A puzzle that involves filling a 9x9 grid with numbers so that each column, each row, and each of the nine 3x3 subgrids contain all of the numbers from 1 to 9.
Greedy Search
A search algorithm that makes locally optimal choices at each step based on a strategy or heuristic. It may not lead to the globally optimal solution.
Randomized Search
A search algorithm that introduces randomness in the search process, such as random restart, random step, or random modification.
Simulated Annealing
A randomized search algorithm that uses a cooling schedule to gradually reduce the probability of accepting worse solutions.
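A minimal simulated-annealing sketch for minimizing a function: worse moves are accepted with probability exp(-delta/T), and the temperature shrinks under a geometric cooling schedule. The objective, step function, and parameters are illustrative:

```python
import math
import random

def simulated_annealing(f, x, step, t0=10.0, cooling=0.95, iters=500):
    """Minimize f starting from x, accepting worse moves with prob exp(-delta/T)."""
    t = t0
    best = x
    for _ in range(iters):
        cand = step(x)
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                  # accept improvement, or a worse move by chance
        if f(x) < f(best):
            best = x
        t *= cooling                  # cooling schedule: worse moves become rarer
    return best

random.seed(0)
# toy objective with a single minimum at x = 3
result = simulated_annealing(lambda x: (x - 3) ** 2,
                             x=random.uniform(-10, 10),
                             step=lambda x: x + random.uniform(-1, 1))
```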
Genetic Algorithm
A population-based algorithm that explores multiple solutions in parallel, mimicking the process of natural selection and evolution.
Constraint Propagation
A technique used in CSPs to reduce the search space by reasoning about the constraints and propagating the effects of variable assignments.
Generate and Test
A problem-solving strategy that generates possible solutions and tests them against a set of constraints or criteria.
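A backtracking solver ties the CSP cards together: it generates partial assignments and tests them against the constraints, pruning inconsistent values early. A sketch on a toy 3-coloring CSP (the problem instance and names are illustrative):

```python
def backtrack(assignment, variables, domains, consistent):
    """Backtracking search: assign one variable at a time, prune inconsistent values."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):   # constraint check prunes the search
            result = backtrack({**assignment, var: value},
                               variables, domains, consistent)
            if result is not None:
                return result
    return None                       # no value works: backtrack

# toy CSP: 3-color the triangle A-B-C so adjacent variables differ
variables = ['A', 'B', 'C']
domains = {v: ['r', 'g', 'b'] for v in variables}
neighbors = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
ok = lambda var, val, asg: all(asg.get(n) != val for n in neighbors[var])
solution = backtrack({}, variables, domains, ok)
```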
Adversarial Searches
Search algorithms used in competitive environments, where multiple agents have opposite goals and try to outsmart each other.
Minimax
An adversarial search algorithm that considers all possible outcomes of a game and chooses the move that maximizes the player's minimum guaranteed outcome, assuming the opponent plays optimally to minimize it.
Alpha-Beta Pruning
An optimization technique used in adversarial search algorithms to reduce the number of nodes evaluated by eliminating branches that are guaranteed to be worse than other branches.
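A minimal minimax-with-alpha-beta sketch on a tiny two-ply game tree; the tree and leaf utilities are illustrative:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning: stop exploring when alpha >= beta."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        v = float('-inf')
        for child in kids:
            v = max(v, alphabeta(child, depth - 1, alpha, beta, False,
                                 children, value))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                 # beta cutoff: the minimizer avoids this branch
        return v
    else:
        v = float('inf')
        for child in kids:
            v = min(v, alphabeta(child, depth - 1, alpha, beta, True,
                                 children, value))
            beta = min(beta, v)
            if alpha >= beta:
                break                 # alpha cutoff
        return v

# toy game tree: leaves hold utilities for the maximizer
tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
leaves = {'a1': 3, 'a2': 5, 'b1': 2, 'b2': 9}
result = alphabeta('root', 2, float('-inf'), float('inf'), True,
                   lambda n: tree.get(n, []), lambda n: leaves[n])
```

Here branch b's first leaf (2) is already worse than branch a's guaranteed 3, so leaf b2 is pruned without being evaluated.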
Expectimax
A search algorithm used in games with an element of chance or uncertainty. It calculates the expected utility of each move instead of assuming a worst-case, optimally-playing opponent.
Expectiminimax
A search algorithm that combines elements of minimax and expectimax algorithms to handle games with both deterministic and chance moves.
Genetic Operators
Crossover: exchange portions from both parents.
Mutation: modify an element of an individual.
Tournament-based selection
Algorithm:
1. Select k individuals from the population and hold a tournament among them.
2. Select the best individual of the k.
3. Repeat steps 1 and 2 until the desired number of individuals has been selected.
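The steps above can be sketched directly; the toy population and identity fitness function are illustrative:

```python
import random

def tournament_select(population, fitness, k, n):
    """Pick n parents; each is the fittest of k randomly drawn individuals."""
    selected = []
    for _ in range(n):
        contestants = random.sample(population, k)      # step 1: draw k individuals
        selected.append(max(contestants, key=fitness))  # step 2: keep the best
    return selected                                     # step 3: repeat until n chosen

random.seed(1)
pop = list(range(20))                 # toy individuals; fitness = the value itself
parents = tournament_select(pop, fitness=lambda x: x, k=3, n=5)
```

Larger k increases selection pressure: weak individuals almost never win a big tournament.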
Adversarial Searches
Zero-sum game: agents have opposite utility values on outcomes. One agent wants to maximize while the other wants to minimize. Pure competition, e.g. chess.
Evaluation Functions
Minimax: the maximizer chooses the maximum value out of the possible minimum values.
Alpha-Beta Pruning: alpha is the smallest value accepted by the maximizer; beta is the largest value accepted by the minimizer.
Expectimax: calculate the expected reward (expected utility) of the chance nodes.
Expectiminimax
A mix of minimax and chance nodes: a random element is added to the game tree, typically as a chance layer between the max and min layers.
Week 4: Introduction to Machine Learning
Supervised machine learning system, learning phase: training data (feature vectors) + algorithm → model.
Inference: test data + model → prediction.
Annotation: the manual process of labeling data before training.
Unbalanced training set: the training data must be representative of the data the model will see in practice, to avoid biases.
Discrete Features
Take finite values in a specific range (e.g. the number of students in a classroom).
Cannot be broken down into smaller parts.
Consist of distinct or unconnected elements.
Typically do not change with time.
Easily visualized using bar charts or pie charts.
Counted, not measured.
Continuous Features
Take any value in a specified range (e.g. the weight of students in a class).
Can be broken down into smaller parts.
Consist of connected elements.
Generally change with time.
Easily visualized using histograms and scatter plots.
Measured, not counted.
One-hot
An encoding method used to represent categorical variables. Each category is mapped to a binary vector in which exactly one element is 1 and the rest are 0.
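One-hot encoding in a few lines, without any library; the color categories are illustrative:

```python
def one_hot(categories):
    """Map each category to a binary vector with a single 1."""
    # fix a stable column order: sorted unique category names
    index = {c: i for i, c in enumerate(sorted(set(categories)))}
    return [[1 if index[c] == i else 0 for i in range(len(index))]
            for c in categories]

# columns end up ordered blue, green, red
vectors = one_hot(['red', 'green', 'red', 'blue'])
```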
Discriminative Learners
Supervised ML models that learn the mapping from inputs to outputs. They focus on learning the decision boundary between different classes.
Evaluation Methods
X-fold Cross-Validation: the learner is trained x times; each run holds out 1/x of the training data for evaluation and trains on the remaining (x-1)/x.
Confusion Matrix: the basis for precision, recall, macro-average, and micro-average.
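The confusion-matrix metrics named on the card, computed from true-positive (TP), false-positive (FP), and false-negative (FN) counts; the example counts are illustrative:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def macro_average(per_class_scores):
    """Macro-average: unweighted mean of per-class scores."""
    return sum(per_class_scores) / len(per_class_scores)

def micro_average(per_class_counts):
    """Micro-average precision: pool (TP, FP) counts over all classes first."""
    tp = sum(c[0] for c in per_class_counts)
    fp = sum(c[1] for c in per_class_counts)
    return tp / (tp + fp)
```

Macro-averaging weights every class equally; micro-averaging weights every prediction equally, so frequent classes dominate it.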
Naive Bayes
Bayes' Rule and equations:
1. P(A|B) = P(B|A) P(A) / P(B)
2. P(B|A) = P(A|B) P(B) / P(A)
3. P(A|B) = P(B|A) P(A) / (P(B|A) P(A) + P(B|not A) P(not A))
General form of Bayes' theorem: p(hi|E) = p(E|hi) p(hi) / sum_j p(E|hj) p(hj)
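The general form of Bayes' theorem as a few lines of arithmetic; the two-hypothesis numbers are illustrative:

```python
def bayes_posterior(priors, likelihoods):
    """p(h_i|E) = p(E|h_i) p(h_i) / sum_j p(E|h_j) p(h_j)."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(joint)                # the evidence p(E), same for every hypothesis
    return [j / total for j in joint]

# toy example: two equally likely hypotheses; the evidence is three times
# as likely under h1 (0.9) as under h2 (0.3)
posteriors = bayes_posterior(priors=[0.5, 0.5], likelihoods=[0.9, 0.3])
```

Because the denominator is shared, only the relative sizes of the numerators matter: here the posteriors come out 0.75 and 0.25.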
Prior Probability
Generally an unconditioned probability of a hypothesis or an event, before any evidence is seen. The prior probability of an event E is written p(E); the prior probability of a hypothesis is written p(hi).
Posterior Probability
Generally a conditional probability of a hypothesis given some evidence. The posterior probability of hypothesis hi given evidence E is written p(hi|E).
Conditional Independence Assumption
Since the denominator P(E) is the same for all classes, we only compare the numerators P(E|hi) P(hi).
Independent events: two events A and B are independent if and only if the probability of their joint occurrence equals the product of their individual probabilities, P(A and B) = P(A) P(B).
Conditionally independent events: two events A and B are conditionally independent given a third event C if and only if P(A and B | C) = P(A|C) P(B|C).
Week 5: Intro to Neural Networks
Artificial Intelligence vs. Machine Learning vs. Deep Learning.
Neural network: a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.
Dynamic response: each unit adapts to the weighted sum of its inputs and makes modifications accordingly.
Non-linear transformations: e.g. the sigmoid function.
Deep learning: "deep" refers to the number of layers in the neural network.
Supervised Machine Learning Models
Regression: predicting a continuous value. Classification: mapping input data to a category or label.