Datasets:

id: int64 (values 1 to 120)
name: string (lengths 3 to 28)
full_name: string (lengths 6 to 32)
before: string (lengths 64 to 6.66k)
after: string (lengths 72 to 6.88k)
tests: string (lengths 80 to 9.12k)
instruction_descriptive: string (lengths 84 to 1.01k)
instruction_lazy: string (lengths 30 to 640)
taxonomy: dict
id: 10
name: csv_parser
full_name: 10_csv_parser
before:
class CSVParser:
    def __init__(self, csv: str):
        self.csv = csv

    def contents(self) -> list[list[str]]:
        lines = self.csv.split("\n")
        output = []
        for line in lines:
            output.append(line.split(","))
        return output
after:
class CSVParser:
    def __init__(self, csv: str):
        self.csv = csv

    def contents(self) -> list[list[str]]:
        lines = self.csv.split("\n")
        output = []
        for line in lines:
            output.append(line.split(","))
        return output

    def header(self) -> list[str]:
        lines = s...
tests:
### START TESTS ###
if True: # pragma: no cover
    parser = CSVParser('''bim,boom,bam,bap
duck,duck,goose,duck
1,0,1,0''')
    p2 = CSVParser('''''')
    p3 = CSVParser('''thing''')
    p4 = CSVParser('''thing1, thing2
a, a''')
    p5 = CSVParser(''',
,''')
    assert parser.contents() == [["bim", "boom", "bam", "b...
instruction_descriptive: Add a function called `header` which returns the first row of a csv file as a list of strings, where every element in the list is a column in the row.
instruction_lazy: Add a method called `header` which returns the header of a csv file as a list
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Language" }
id: 11
name: fibonacci
full_name: 11_fibonacci
before:
class Fib:
    def __iter__(self):
        self.prev_prev = 0
        self.prev = 1
        return self

    def __next__(self):
        output = self.prev + self.prev_prev
        self.prev_prev = self.prev
        self.prev = output
        return output
after:
class Fib:
    def __init__(self):
        self.prev = 0
        self.prev_prev = 1

    def __iter__(self):
        self.prev_prev = 0
        self.prev = 1
        return self

    def __next__(self) -> int:
        output = self.prev + self.prev_prev
        self.prev_prev = self.prev
        self.prev = output
        ...
tests:
### START TESTS ###
if True: # pragma: no cover
    f = Fib()
    iterator = iter(f)
    assert next(iterator) == 1
    assert next(iterator) == 2
    assert next(iterator) == 3
    assert next(iterator) == 5
    iterator = iter(f)
    assert next(iterator) == 1
    assert next(iterator) == 2
    assert next(iterato...
instruction_descriptive: add a method `next_n_fibs(n: int)` which takes in an integer, and produces a list containing the next `n` integers in the fibonacci sequence starting from what the object would return if its `__next__` method was called. The method should not mutate the state of the object. When asked for the next fibonacci number aft...
instruction_lazy: create a function `next_n_fibs` which takes an integer `n` and produces a list containing the next `n` numbers in the sequence. the `Fib` object should not have its state changed by this function.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
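The copy-based approach the row above asks for can be sketched as follows. This is a minimal sketch, not the dataset's actual `after` code: the `Fib` class here is a simplified stand-in, and `next_n_fibs` works on local copies of the state so the object is never mutated.

```python
class Fib:
    """Simplified stand-in for the row's Fibonacci iterator."""

    def __init__(self):
        self.prev_prev = 0
        self.prev = 1

    def __next__(self) -> int:
        output = self.prev + self.prev_prev
        self.prev_prev = self.prev
        self.prev = output
        return output

    def next_n_fibs(self, n: int) -> list[int]:
        # Advance local copies of the state; the object itself is untouched.
        a, b = self.prev_prev, self.prev
        out = []
        for _ in range(n):
            a, b = b, a + b
            out.append(b)
        return out
```

Because only `a` and `b` change, calling `next_n_fibs` any number of times leaves the next `__next__` result unchanged.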
id: 13
name: maze_solver
full_name: 13_maze_solver
before:
from typing import List, Literal, Tuple
from queue import PriorityQueue

Move = Literal["up", "down", "left", "right"]
# 0 = up, 1 = down, 2 = left, 3 = right
MoveIndex = Literal[0, 1, 2, 3]
# 0 = empty, 1 = wall, 2 = start, 3 = end
Cell = Literal[0, 1, 2, 3]

class Maze:
    def __init__(self, maze: List[List[Cell]])...
after:
from typing import List, Literal, Tuple
from queue import PriorityQueue

Move = Literal["up", "down", "left", "right"]
# 0 = up, 1 = down, 2 = left, 3 = right
MoveIndex = Literal[0, 1, 2, 3]
# 0 = empty, 1 = wall, 2 = start, 3 = end
Cell = Literal[0, 1, 2, 3]

class Maze:
    def __init__(self, maze: List[List[Cell]])...
tests:
### START TESTS ###
if True: # pragma: no cover
    exp, path = Maze([
        [2, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [3, 0, 0, 0, 0],
    ]).solve()
    assert exp == 14
    assert path == [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3), ...
instruction_descriptive: Change the `solve` function in the `Maze` class to use A* with manhattan distance as the heuristic instead of using Uniform Cost Search (UCS). The manhattan distance heuristic is mathematically defined as follows: `h(n) = |n.x - goal.x| + |n.y - goal.y|`; Where `n` is the current node and `goal` is the goal node.
instruction_lazy: Change the `solve` function to use A* with manhattan distance instead of using UCS.
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "DSA" }
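The row above swaps UCS for A* with the manhattan heuristic `h(n) = |n.x - goal.x| + |n.y - goal.y|`. A minimal grid sketch of that idea follows; the names `manhattan` and `astar` are illustrative, not the row's `Maze.solve`, and the grid uses 0 for empty and 1 for wall.

```python
import heapq

def manhattan(a, b):
    # h(n) = |n.x - goal.x| + |n.y - goal.y|
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    """A* over a 0/1 grid; returns (nodes expanded, path) or (count, None)."""
    frontier = [(manhattan(start, goal), start, [start])]
    seen = set()
    expansions = 0
    while frontier:
        f, cell, path = heapq.heappop(frontier)
        if cell in seen:
            continue
        seen.add(cell)
        expansions += 1
        if cell == goal:
            return expansions, path
        g = f - manhattan(cell, goal)  # recover cost-so-far from f = g + h
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier, (g + 1 + manhattan(nxt, goal), nxt, path + [nxt]))
    return expansions, None
```

The only change from UCS is the priority: UCS orders the frontier by `g` alone, while A* orders it by `g + h`, which (with an admissible heuristic like manhattan distance on a unit-cost grid) expands fewer nodes yet still returns a shortest path.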
id: 14
name: matrix_operations
full_name: 14_matrix_operations
before:
class Matrix:
    def __init__(self, matrix: list[list[int]]):
        self.matrix = matrix

    def add(self, other):
        result = []
        for i in range(len(self.matrix)):
            row = []
            for j in range(len(self.matrix[0])):
                row.append(self.matrix[i][j] + other.matrix[i][j])
            ...
after:
class Matrix:
    def __init__(self, matrix: list[list[int]]):
        self.matrix = matrix

    def add(self, other):
        if self.same_size(self.matrix, other.matrix):
            result = []
            for i in range(len(self.matrix)):
                row = []
                for j in range(len(self.matrix[0]))...
tests:
### START TESTS ###
if True: # pragma: no cover
    m1 = [
        [1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]
    ]
    m2 = [
        [9, 9, 9],
        [8, 8, 8],
        [0, 1, -2]
    ]
    m3 = [
        [-1, 5, 0],
        [2, -8, 7],
        [4, 3, -2],
        [0, 6, 1]
    ]
    mat1 = Matrix(m1)
    ...
instruction_descriptive: Modify the Matrix class to check that the matrices received are of the same size before subtracting or adding them. This should be done with a helper function 'same_size' that returns true if the matrices have the same dimension.
instruction_lazy: Edit the methods add and subtract to check that dimension of matrices match using a helper method named 'same_size'.
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "Math" }
id: 15
name: pandas_random_data
full_name: 15_pandas_random_data
before:
import pandas as pd
import random
import string

class GradeManipulator:
    def __init__(self):
        self.data = self._generate_random_data()

    def _generate_random_data(self):
        names = [''.join(random.choices(string.ascii_uppercase, k=5)) for _ in range(100)]
        ages = [random.ran...
after:
import pandas as pd
import random
import string

class GradeManipulator:
    def __init__(self):
        self.data = self._generate_random_data()

    def _generate_random_data(self):
        names = [''.join(random.choices(string.ascii_uppercase, k=5)) for _ in range(100)]
        ages = [random.ran...
tests:
### START TESTS ###
if True: # pragma: no cover
    random.seed(42)
    dm = GradeManipulator()
    assert dm.data.shape == (100, 4), "Data shape is not as expected."
    top_3_scorers = dm.top_scorers(3)
    assert top_3_scorers.shape[0] == 3, "top_scorers does not return the correct number of top scorers."
    asse...
instruction_descriptive: Add two methods to the `GradeManipulator` class: 1. `average_score_by_grade(self)` - returns a DataFrame of the average "Score" column for each category of "Grade" (i.e., "A", "B", "C", "D", and "F"). Do not reset the index. 2. `top_scorers(self, n)` - returns a DataFrame of the n students with the highest "Score" valu...
instruction_lazy: Add two methods to the grade manipulator: `average_score_by_grade` and `top_scorers(n)`, which returns a data frame of the average score for each grade and a data frame of the top n students, respectively.
taxonomy: { "change_kind": "adaptive", "libraries": [ "pandas" ], "topic": "Math" }
id: 16
name: interpreter
full_name: 16_interpreter
before:
"""
A programming language interpreter for the following language:

expr ::= expr <binop> expr | <number> | <name> | var <name> = <expr> in <expr>
binop ::= + | -
"""
from abc import ABC, abstractmethod

class AST(ABC):
    @abstractmethod
    def eval(self, env) -> int:
        pass

class BinOp(AST):
    def __init_...
after:
"""
A programming language interpreter for the following language:

expr ::= expr <binop> expr | <number> | <name> | var <name> = <expr> in <expr>
binop ::= + | - | * | /
"""
from abc import ABC, abstractmethod

class AST(ABC):
    @abstractmethod
    def eval(self, env) -> int:
        pass

class BinOp(AST):
    def...
tests:
### START TESTS ###
if True: # pragma: no cover
    assert Number(1).eval({}) == 1
    assert BinOp(Number(1), "+", Number(2)).eval({}) == 3
    assert BinOp(Number(1), "-", Number(2)).eval({}) == -1
    assert BinOp(Number(1), "*", Number(2)).eval({}) == 2
    assert BinOp(Number(30), "*", Number(2)).eval({}) == 60
    ...
instruction_descriptive: Add two new operations to the AST of the programming language: "*" and "/". The `eval` method in the `BinOp` class should evaluate the two operands and return the result of the operation. "*" should multiply the operands, and "/" should perform integer division on the operands (i.e. the result should be the floored quo...
instruction_lazy: Add multiplication ("*") and integer division ("/") to the programming language. Throw a zero division error when necessary.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Language" }
id: 17
name: quiz
full_name: 17_quiz
before:
class Quiz:
    def __init__(self, questions, answers):
        self.questions = questions
        self.answers = answers
        self.total_questions = len(questions)
        self.score = 0
        self.current_question = 0

    def check_answer(self, question_index, answer) -> bool:
        if self.answers[question_...
after:
class Quiz:
    def __init__(self, questions, answers):
        self.questions = questions
        self.answers = answers
        self.total_questions = len(questions)
        self.score = 0
        self.current_question = 0
        self.skipped = 0

    def check_answer(self, question_index, answer) -> bool:
        ...
tests:
### START TESTS ###
if True: # pragma: no cover
    questions = ["How many days in a week?",
                 "What color absorbs the most light?",
                 "Which language has more native speakers? English or Spanish?",
                 "Who has won the most academy awards?"]
    answers = ["7", "Black", "Spanish", "Walt Disney"]
    quiz = ...
instruction_descriptive: Add a new method `skip_question` and a field `skipped` to the Quiz class. This represents a new functionality in the Quiz class that allows users to skip a question, and keep track of how many questions were skipped. Output the number of question skipped as a game statistic in the `display_results` method.
instruction_lazy: Modify the `Quiz` class to allow the user to skip a question using `self.skip_question()`, and record the number of questions that were skipped in `self.skipped`.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Misc" }
id: 18
name: deck_of_cards
full_name: 18_deck_of_cards
before:
import random

class Card:
    def __init__(self, suit, value):
        self.suit = suit
        self.value = value

    def __str__(self):
        return f"{self.value} of {self.suit}"

class Deck:
    def __init__(self):
        self.cards = []
        self.build()

    def build(self):
        for suit in ["Spades...
after:
import random

class Card:
    def __init__(self, suit, value):
        self.suit = suit
        self.value = value

    def __str__(self):
        return f"{self.value} of {self.suit}"

class Deck:
    def __init__(self):
        self.cards = []
        self.build()

    def build(self):
        for suit in ["Spades...
tests:
### START TESTS ###
if True: # pragma: no cover
    random.seed(42)
    card = Card("Hearts", "Ace")
    assert str(card) == "Ace of Hearts"
    deck = Deck()
    assert len(deck.cards) == 52
    first_card = deck.cards[0]
    assert str(first_card) == "2 of Spades"
    deck.shuffle()
    shuffled_first_card = deck...
instruction_descriptive: Implement the `draw` method in the `Deck` class, and the `receive_card` method in the `Player` class. The `draw` method should remove a card from the front of the deck and return it. It should also return `None` if the deck is empty. The `receive_card` method should take a card as an argument and append it to the end...
instruction_lazy: Implement the `draw` method in the deck class to draw a card from the front of the deck, and the `receive_card` method in the player class to give a card to the player.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Misc" }
id: 19
name: traffic_analysis
full_name: 19_traffic_analysis
before:
from typing import Optional, Literal
from abc import ABC, abstractmethod

class Visitor(ABC):
    """
    A visitor.
    """
    @abstractmethod
    def visit(self, city_intersection: 'CityIntersection'):
        """
        Visit a city intersection.
        """

class City:
    """
    A city with a name, populati...
after:
from typing import Optional, Literal
from abc import ABC, abstractmethod

class Visitor(ABC):
    """
    A visitor.
    """
    @abstractmethod
    def visit(self, city_intersection: 'CityIntersection'):
        """
        Visit a city intersection.
        """

class City:
    """
    A city with a name, populati...
tests:
### START TESTS ###
if True: # pragma: no cover
    atlanta = City('Atlanta', 500000, 0.5)
    boston = City('Boston', 200000, 0.3)
    chicago = City('Chicago', 1000000, 0.7)
    denver = City('Denver', 300000, 0.4)
    el_paso = City('El Paso', 100000, 0.1)
    fargo = City('Fargo', 50000, 0.05)
    four_way_inters...
instruction_descriptive: Add a new type of intersection called 'Roundabout', and implement the functionality to handle it in the `TrafficAnalysisVisitor` class. The 'Roundabout' intersection should reduce traffic by 30%, therefore make sure that the traffic value is adjusted by 0.7. Also, there is a clear problem in the `visit` method of the ...
instruction_lazy: Add a new type of intersection, 'Roundabout', which should reduce traffic by 30%. Also, make the visitor actually recur through children intersections too.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
id: 1
name: cipher
full_name: 1_cipher
before:
class Cipher:
    def __init__(self):
        self.ciphers = {
            "default": {
                'a': 'b',
                'b': 'a',
                'c': 'e',
                'd': 'd',
                'e': 'c',
                'f': 'g',
                'g': 'f',
                'h': 'i',
                'i': 'h...
after:
class Cipher:
    def __init__(self):
        self.ciphers = {
            "default": {
                'a': 'b',
                'b': 'a',
                'c': 'e',
                'd': 'd',
                'e': 'c',
                'f': 'g',
                'g': 'f',
                'h': 'i',
                'i': 'h...
tests:
### START TESTS ###
if True: # pragma: no cover
    cipher = Cipher()
    default = cipher.ciphers["default"]
    assert default['m'] == 'l'
    assert default['n'] == 'o'
    assert default['d'] == 'd'
    assert default['w'] == 'v'
    assert cipher.translate("default", "willthedogsbark") == "vhmmuicdnfrabsj"
    ...
instruction_descriptive: Create a new method `caesar_cipher` that takes in an argument `shift`. It should shift every character in `self.alphabet` by the given `shift` amount. For example, if the shift is 4, then the letter `a` would be mapped `e`. This method should append the generated cipher into `self.ciphers` and name it `caesar` followed...
instruction_lazy: Create a new method `caesar_cipher` that creates a new cipher in `self.ciphers` that shifts every letter by a given amount.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
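The shift mapping the row above describes (shift 4 sends `a` to `e`, wrapping past `z`) can be sketched as a standalone function. This is a minimal sketch, assuming a lowercase 26-letter alphabet; `caesar_mapping` is a hypothetical helper, not the row's `caesar_cipher` method.

```python
import string

def caesar_mapping(shift: int) -> dict[str, str]:
    # Map each lowercase letter to the one `shift` places later, wrapping at 'z'.
    alphabet = string.ascii_lowercase
    return {c: alphabet[(i + shift) % 26] for i, c in enumerate(alphabet)}

mapping = caesar_mapping(4)
```

The modulo handles wraparound, so `z` shifted by 4 lands on `d`.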
id: 20
name: html_parser
full_name: 20_html_parser
before:
from typing import List, Union
import re

class HTMLElement:
    def __init__(self, name, content: List[Union[str, 'HTMLElement']]):
        self.name = name
        self.content = content

    def __str__(self):
        return f"<{self.name}>{''.join(str(c) for c in self.content)}</{self.name}>"

    def __repr__(sel...
after:
from typing import Dict, List, Union
import re

class HTMLElement:
    def __init__(self, name, content: List[Union[str, 'HTMLElement']], attributes: Dict[str, str]):
        self.name = name
        self.content = content
        self.attributes = attributes

    def __str__(self):
        prelude = f"<{self.name}"
        ...
tests:
### START TESTS ###
if True: # pragma: no cover
    content = "<div>Hello <span>world</span></div>"
    elements = parse(content)
    assert "\n".join(str(elem) for elem in elements) == content
    ex2 = """<head>
<title>My awesome page</title>
</head>
<body>
<div>
<h1>Super awesome page</h1>
<p>This is my awesome pa...
instruction_descriptive: Add support for HTML attributes for the `parse(content: str)` function and `HTMLElement` class. In the `HTMLElement` class add an `attributes` field that is a dictionary of the HTML attributes, and update the `__str__` function to include the attributes in the opening tag. The `parse(content: str)` function should pars...
instruction_lazy: Add support for HTML attributes to the parser and `HTMLElement` class.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Language" }
id: 21
name: dijkstra_bellman
full_name: 21_dijkstra_bellman
before:
import heapq

class Graph:
    def __init__(self):
        self.nodes = set()
        self.edges = {}

    def add_node(self, value):
        self.nodes.add(value)
        self.edges[value] = []

    def add_edge(self, from_node, to_node, weight):
        self.edges[from_node].append((to_node, weight))
        self.ed...
after:
class Graph:
    def __init__(self):
        self.nodes = set()
        self.edges = []

    def add_node(self, value):
        self.nodes.add(value)

    def add_edge(self, from_node, to_node, weight):
        self.edges.append((from_node, to_node, weight))

    def distances_to(self, start):
        """
        Compu...
tests:
### START TESTS ###
if True: # pragma: no cover
    graph1 = Graph()
    for node in ['A', 'B', 'C', 'D']:
        graph1.add_node(node)
    graph1.add_edge('A', 'B', 1)
    graph1.add_edge('B', 'C', 2)
    graph1.add_edge('C', 'D', 3)
    graph1.add_edge('A', 'D', 10)
    shortest_path1 = graph1.distances_to('A')
    ...
instruction_descriptive: Add support for negative weights in `distances_to` function, throwing a `ValueError` if there are any negative cycles in the graph. One way to do this, is to use the Bellman-Ford algorithm to find the shortest path from the source to all other nodes. If there are any negative cycles, the algorithm will detect them and...
instruction_lazy: Make the `distances_to` function support negative weights; but throw a `ValueError` if there are any negative cycles in the graph.
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "DSA" }
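The Bellman-Ford approach the row above suggests can be sketched as a standalone function. This is a minimal sketch under illustrative names (`bellman_ford`, an edge list of `(u, v, w)` triples), not the row's `distances_to` method: relax every edge `|V| - 1` times, then run one more pass; any further improvement proves a negative cycle.

```python
def bellman_ford(nodes, edges, start):
    """Shortest distances from `start` over edges (u, v, w).

    Raises ValueError if a negative cycle is reachable."""
    dist = {n: float("inf") for n in nodes}
    dist[start] = 0
    # |V| - 1 rounds of relaxation suffice for any shortest path.
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: an improvement here means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle detected")
    return dist
```

Unlike Dijkstra, this tolerates negative edge weights, at the cost of O(|V| * |E|) time.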
id: 22
name: diff_format
full_name: 22_diff_format
before:
from typing import List

def opt(before: str, after: str):
    before_l = list(enumerate(before.split("\n")))
    b = len(before_l)
    after_l = list(enumerate(after.split("\n")))
    a = len(after_l)
    # OPT[N][M] is best for first n of before and m of after
    OPT = [[None] * (a + 1) for i in range(b + 1)]
    ...
after:
from typing import List

def opt(before: str, after: str):
    before_l = list(enumerate(before.split("\n")))
    b = len(before_l)
    after_l = list(enumerate(after.split("\n")))
    a = len(after_l)
    # OPT[N][M] is best for first n of before and m of after
    OPT = [[None] * (a + 1) for i in range(b + 1)]
    ...
tests:
### START TESTS ###
if True: # pragma: no cover
    b1 = '''bleh
bleh'''
    a1 = '''bob
bleh
bleh'''
    b2 = '''hello
hello'''
    a2 = '''hello
hey
hello'''
    b3 = '''replacethis
hey'''
    a3 = '''replaced
hey'''
    b4 = '''lots
of
stuff'''
    a4 = ''''''
    b5 = '''only one thing to delete'''
    a5 = '...
instruction_descriptive: The following code takes a before and after string and creates a relative diff syntax which can edit the before string into the after. It has 3 operations <add>, <del>, and <del><add>. x<add>string adds the given string after the xth line in the before. x<del> deletes the xth line in the before. x<del><add>string repla...
instruction_lazy: The following code takes a before and after string and creates a relative diff syntax which can edit the before string into the after. It has 3 operations `line`<add>`string`, `line`<del>, and `line`<del><add>`string` which do their operations relative to the lines in the before. Example 1: Before: hey hey After:...
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "Language" }
id: 23
name: bpe_tokenizer
full_name: 23_bpe_tokenizer
before:
from typing import Dict, List

class BPETokenizerTrainer(object):
    def __init__(self, training_set: str, max_num_merges: int) -> None:
        self.max_num_merges = max_num_merges
        self.last_token_id = 0
        self.training_set_symbolized: List[str] = []
        self.lookup_table: Dict[str, int] = {}
    ...
after:
from typing import Dict, List

class BPETokenizerTrainer(object):
    def __init__(self, training_set: str, max_num_merges: int, max_num_tokens: int) -> None:
        self.max_num_merges = max_num_merges
        self.last_token_id = 0
        self.max_num_tokens = max_num_tokens
        self.training_set_symbolized: ...
tests:
### START TESTS ###
if True: # pragma: no cover
    training_set = "Think slow when you write in ink"
    trainer0 = BPETokenizerTrainer(training_set=training_set, max_num_merges=250, max_num_tokens=100)
    assert len(trainer0.get_lookup_table()) == 15
    assert "in" not in trainer0.get_lookup_table()
    trainer0....
instruction_descriptive: Add a `max_num_tokens` parameter to the Trainer constructor. `max_num_tokens` should limit the max size of the `lookup_table` on the Trainer. During training, the while loop should terminate early if the `lookup_table` reaches a length of `max_num_tokens`.
instruction_lazy: Add a `max_num_tokens` parameter to the Trainer which limits the number of tokens that are defined.
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "Math" }
id: 24
name: tree_abstractions
full_name: 24_tree_abstractions
before:
from abc import abstractmethod

class Tree:
    @abstractmethod
    def tree_map(self, func):
        pass

    @abstractmethod
    def tree_filter(self, func, filler):
        pass

    @abstractmethod
    def tree_andmap(self, func):
        pass

    @abstractmethod
    def tree_ormap(self, func):
        pass
    ...
after:
from abc import abstractmethod

class Tree:
    @abstractmethod
    def tree_map(self, func):
        pass

    @abstractmethod
    def tree_filter(self, func, filler):
        pass

    @abstractmethod
    def tree_andmap(self, func):
        pass

    @abstractmethod
    def tree_ormap(self, func):
        pass
    ...
tests:
### START TESTS ###
if True: # pragma: no cover
    add_ten = lambda e : e + 10
    is_positive = lambda e : e > 0
    contains_x = lambda e : "x" in e
    count_length = lambda e : len(e)
    assert Leaf(3).tree_map(add_ten).value == Leaf(13).value
    assert Leaf(-10).tree_andmap(is_positive) == False
    assert L...
instruction_descriptive: Change the `tree_map` and `tree_filter` methods in `Tree` and its subclasses to return new objects rather than modifying in place.
instruction_lazy: Change `Tree` and its subclasses not modify in place and be chainable.
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "DSA" }
id: 25
name: sudoku_solver
full_name: 25_sudoku_solver
before:
from typing import List, Optional
from z3 import ArithRef, Int, Solver, Distinct, And, sat, IntVal

def make_9x9_z3_board(board_text: str, solver: Solver) -> List[List[ArithRef]]:
    """
    Creates a board of z3 variables from a string representation of a board.
    For unknown cells, make the value be 0, and for kn...
after:
from typing import List, Optional
from z3 import ArithRef, Int, Solver, Distinct, And, sat, IntVal

def make_9x9_z3_board(board_text: str, solver: Solver) -> List[List[ArithRef]]:
    """
    Creates a board of z3 variables from a string representation of a board.
    For unknown cells, make the value be 0, and for kn...
tests:
### START TESTS ###
if True: # pragma: no cover
    def __eval_secret_check_valid(board: List[List[int]]) -> bool:
        for row in board:
            if len(set(row)) != 9:
                return False
        for col in zip(*board):
            if len(set(col)) != 9:
                return False
        for i in...
instruction_descriptive: This version of the sudoku solver and checker does not reflect the original game of sudoku; the original game also checks for the uniqueness of 3x3 subgrids in addition to the rows and columns. Update the `assert_uniq` function to add new constraints for all nine 3x3 subgrids, and update the `check_valid` function to ...
instruction_lazy: Make both the sudoku solver and verifier support the nine 3x3 subgrids that are in the original sudoku game.
taxonomy: { "change_kind": "corrective", "libraries": [ "z3" ], "topic": "DSA" }
id: 26
name: kl_divergence
full_name: 26_kl_divergence
before:
import torch

def kl_div(q: torch.distributions.Distribution, p: torch.distributions.Distribution) -> torch.Tensor:
    return torch.distributions.kl_divergence(q, p).mean()
after:
import torch

def kl_div(q: torch.distributions.Distribution, p: torch.distributions.Distribution, num_samples: int = 100000) -> torch.Tensor:
    x = q.sample((num_samples,))
    log_q = q.log_prob(x)
    log_p = p.log_prob(x)
    kl_div = torch.mean(log_q - log_p)
    return kl_div
tests:
### START TESTS ###
if True: # pragma: no cover
    torch.manual_seed(10)
    P1 = torch.distributions.Normal(loc=0.0, scale=1.0)
    Q1 = torch.distributions.Normal(loc=0.1, scale=1.0)
    assert torch.allclose(torch.distributions.kl_divergence(
        q=Q1, p=P1), kl_div(q=Q1, p=P1), atol=1e-2)
    P2 = torch.distribu...
instruction_descriptive: Replace the `kl_div` function body to compute a monte carlo kl divergence approximation by sampling `num_samples` from distribution q. `num_samples` should be a parameter on `kl_div` with a default value of 100000.
instruction_lazy: Change `kl_div` to compute a monte carlo approximation of the kl divergence given `num_samples` as a parameter, which by default is set to 100000.
taxonomy: { "change_kind": "perfective", "libraries": [ "torch" ], "topic": "Math" }
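The estimator in the row above is `E_q[log q(x) - log p(x)]` averaged over samples drawn from `q`. The same idea can be sketched without torch, using only the standard library for two univariate normals; the function names here (`normal_logpdf`, `mc_kl_normal`) are illustrative, not the row's `kl_div`.

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    # Log density of N(mu, sigma^2) at x.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def mc_kl_normal(mu_q, sig_q, mu_p, sig_p, num_samples=100000, seed=0):
    """Monte Carlo estimate of KL(q || p) = E_q[log q(x) - log p(x)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        x = rng.gauss(mu_q, sig_q)  # sample from q, as the instruction specifies
        total += normal_logpdf(x, mu_q, sig_q) - normal_logpdf(x, mu_p, sig_p)
    return total / num_samples
```

For two normals with equal scale the closed form is `(mu_q - mu_p)^2 / (2 * sigma^2)`, which gives a ground truth to check the estimate against.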
id: 28
name: password_strength_checker
full_name: 28_password_strength_checker
before:
def minLength(password):
    assert type(password) == str
    return len(password) >= 8

def isPasswordStrong(password):
    return minLength(password)
after:
def minLength(password):
    assert type(password) == str
    return len(password) >= 8

def containsSpecialChar(password):
    specialChar = '`~!@#$%^&*()-_+=[]{}|\\:;<>,.?/\"\''
    assert type(password) == str
    for char in password:
        if char in specialChar:
            return True
    return False

def isP...
tests:
### START TESTS ###
if True: # pragma: no cover
    assert containsSpecialChar('1243i4u@') == True
    assert containsSpecialChar('pqighp') == False
    assert containsSpecialChar('') == False
    assert containsSpecialChar('!@#$') == True
    assert isPasswordStrong('ThisPAsswordIsStrong!') == True
    assert isPass...
instruction_descriptive: Revise the `isPasswordStrong` function to include an additional check that validates the presence of at least one special character within the password. Define a new function named `containsSpecialChar` which iterates over the given password and returns True if any character matches the predefined set of special chara...
instruction_lazy: Add a function `containsSpecialChar` that checks if a string contains a special character. Update `isPasswordStrong` to check for the presence of a special character in the password.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Language" }
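The check the row above describes can be sketched in one function. This is a minimal sketch with an illustrative name and special-character set modeled on the row's `specialChar` string; the dataset's actual `after` code defines its own.

```python
def contains_special_char(password: str) -> bool:
    # Hypothetical set of special characters, mirroring the row's `specialChar`.
    specials = set("`~!@#$%^&*()-_+=[]{}|\\:;<>,.?/\"'")
    # any() short-circuits on the first special character found.
    return any(ch in specials for ch in password)
```

Using a `set` makes each membership check O(1), and `any()` returns False for the empty string, matching the row's tests.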
id: 29
name: genetic_algorithm
full_name: 29_genetic_algorithm
before:
import numpy as np
import random
import math

random.seed(100)

class City:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"({self.x}, {self.y})"

    def __eq__(self, other):
        if isinstance(other, City):
            return self.x == other.x and self...
after:
import numpy as np
import random
import math

random.seed(100)

class City:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"({self.x}, {self.y})"

    def __eq__(self, other):
        if isinstance(other, City):
            return self.x == other.x and ...
tests:
### START TESTS ###
if True: # pragma: no cover
    # checking that nothing that shouldn't change has changed
    cities = generate_cities(10)
    assert cities == [City(2, 7), City(7, 2), City(6, 5), City(6, 8), City(1, 8), City(1, 1), City(7, 4), City(0, 10), City(10, 3), City(5, 3)]
    assert distance(cities[0]...
instruction_descriptive: Edit the genetic algorithm to not generate any routes with repeating cities when calling `next_generation`.
instruction_lazy: Edit the code to not generate any routes with repeating cities in any generation.
taxonomy: { "change_kind": "corrective", "libraries": [ "numpy" ], "topic": "DSA" }
id: 30
name: cross_correlation
full_name: 30_cross_correlation
before:
import numpy as np

def cross_correlation(image, kernel):
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh = ih - kh + 1
    ow = iw - kw + 1
    output = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            region = image[i:i+kh, j:j+kw]
            element_wise_product = re...
after:
import numpy as np

def cross_correlation(image, kernel, padding):
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh = ih - kh + 1
    ow = iw - kw + 1
    oh = ih + 2 * padding - kh + 1
    ow = iw + 2 * padding - kw + 1
    output = np.zeros((oh, ow))
    padded = np.pad(image, ((padding, padding), (padd...
tests:
### START TESTS ###
if True: # pragma: no cover
    import numpy as np
    import torch
    import torch.nn.functional as F
    im_size, ker_size, padding = 6, 3, 3
    im_sizes = [5, 10, 8]
    ker_sizes = [3, 2, 4]
    paddings = [0, 2, 3]
    for im_size, ker_size, pad in zip(im_sizes, ker_sizes, paddings):
        ...
instruction_descriptive: Change the method `cross_correlation` to also take in an argument `padding`, which pads the image of the method by the number indicated on all sides before performing the cross correlation operation on the padded image.
instruction_lazy: Change the `cross_correlation` method to take in an argument `padding`, which corresponds to the padding of a cross correlation operation.
taxonomy: { "change_kind": "perfective", "libraries": [ "numpy" ], "topic": "Math" }
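The padded cross-correlation described above can be sketched without numpy on nested lists: zero-pad the image on all four sides, then slide the kernel over every valid position. A minimal sketch, not the row's numpy implementation:

```python
def cross_correlation(image, kernel, padding=0):
    """2-D cross-correlation over nested lists, with zero padding on all sides."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    # Build the zero-padded image.
    ph, pw = ih + 2 * padding, iw + 2 * padding
    padded = [[0] * pw for _ in range(ph)]
    for i in range(ih):
        for j in range(iw):
            padded[i + padding][j + padding] = image[i][j]
    # Output shape follows the usual formula: in + 2*pad - k + 1.
    oh, ow = ph - kh + 1, pw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                padded[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out
```

With `padding=0` this reduces to the "valid" cross-correlation of the row's `before` code; a positive `padding` grows the output by `2 * padding` in each dimension.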
id: 31
name: bookkeeping
full_name: 31_bookkeeping
before:
class Yarn:
    """Represents the yarns that a yarn store sells"""

    def __init__(self, purchase_price: int, sell_price: int, color: str):
        self.purchase_price = purchase_price
        self.sell_price = sell_price
        self.color = color

class BankAccount:
    """Represents the bank account of this ya...
after:
class Yarn:
    """Represents the yarns that a yarn store sells"""

    def __init__(self, purchase_price: int, sell_price: int, color: str):
        self.purchase_price = purchase_price
        self.sell_price = sell_price
        self.color = color

class BankAccount:
    """Represents the bank account of this ya...
tests:
### START TESTS ###
if True: # pragma: no cover
    y1 = Yarn(2, 3, "black")
    y2 = Yarn(4, 9, "yellow")
    y3 = Yarn(1, 4, "blue")
    y4 = Yarn(2, 5, "red")
    y5 = Yarn(3, 3, "white")
    s = Store(100)
    # purchase price of this should be 62
    stock = {
        y1: 5,
        y2: 5,
        y3: 10,
        ...
instruction_descriptive: Edit the `buy_yarn` and `sell_yarn` methods in the `Store` class to calculate the price of the order depending on whether its a purchase or a sale, rather than taking in an argument that specifies the total cost of the order.
instruction_lazy: Edit the `buy_yarn` and `sell_yarn` methods in the `Store` class to calculate the price of the order rather than taking in an argument for it.
taxonomy: { "change_kind": "adaptive", "libraries": [], "topic": "Misc" }
id: 32
name: markov_transition
full_name: 32_markov_transition
before:
import numpy as np

class MarkovChain:
    def create_transition_matrix(self, matrix):
        matrix = np.array(matrix)
        column_sums = np.sum(matrix, axis=0)
        normalized_matrix = matrix / column_sums
        return normalized_matrix.tolist()
after:
from typing import Dict, List
import numpy as np

class MarkovChain:
    def create_transition_matrix(self, matrix):
        matrix = np.array(matrix)
        column_sums = np.sum(matrix, axis=0)
        normalized_matrix = matrix / column_sums
        return normalized_matrix.tolist()

    def translate_from_list(s...
tests:
### START TESTS ###
if True: # pragma: no cover
    chain = MarkovChain()
    l1 = {
        0: [1, 3],
        1: [0, 2],
        2: [1, 3],
        3: [0, 2, 4],
        4: [3]
    }
    l2 = {
        0: [4],
        1: [2, 3, 4],
        2: [1, 5, 6],
        3: [1, 7, 8, 2],
        4: [1, 9, 0, 3],
        5: ...
instruction_descriptive: Edit the code to include a method called `translate_from_list(self, adj_list: Dict[int, List[int]]) -> List[List[float]]` that creates the transition matrix that represents the adjacency list, assume all edges are undirected. All columns must sum to 1.
instruction_lazy: Edit the code to include a method `translate_from_list(self, adj_list)` that creates a transition matrix based on the adjacency list (of type `Dict[int, List[int]]`).
taxonomy: { "change_kind": "adaptive", "libraries": [ "numpy" ], "topic": "DSA" }
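The column-stochastic construction the row above asks for can be sketched with plain lists: from node `j`, each of its neighbors is equally likely, so column `j` holds `1 / deg(j)` in the rows of `j`'s neighbors. A minimal sketch, assuming nodes are labeled `0..n-1`; `transition_matrix` is an illustrative name, not the row's `translate_from_list`.

```python
def transition_matrix(adj_list):
    """Column-stochastic matrix for an adjacency list Dict[int, List[int]].

    Entry [i][j] is the probability of stepping to node i from node j."""
    n = len(adj_list)
    matrix = [[0.0] * n for _ in range(n)]
    for j, neighbors in adj_list.items():
        for i in neighbors:
            # Uniform step: from j, each neighbor i gets 1/deg(j).
            matrix[i][j] = 1.0 / len(neighbors)
    return matrix
```

Every node with at least one neighbor yields a column summing to 1, which is the invariant the row's tests check.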
id: 33
name: genetic_algorithm_2
full_name: 33_genetic_algorithm_2
before:
import numpy as np
import random
import math

random.seed(100)

class City:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"({self.x}, {self.y})"

    def __eq__(self, other):
        if isinstance(other, City):
            return self.x == other.x and ...
after:
import numpy as np
import random
import math

random.seed(100)

class City:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"({self.x}, {self.y})"

    def __eq__(self, other):
        if isinstance(other, City):
            return self.x == other.x and ...
tests:
### START TESTS ###
if True: # pragma: no cover
    cities = generate_cities(10)
    assert cities == [City(2, 7), City(7, 2), City(6, 5), City(6, 8), City(1, 8), City(1, 1), City(7, 4), City(0, 10), City(10, 3), City(5, 3)]
    assert distance(cities[0], cities[1]) == distance(cities[1], cities[0])
    assert dista...
instruction_descriptive: Edit the genetic algorithm to guarantee that two random Cities in the list are swapped if the generated number between 0 and 1 is below the stated threshold specified in the `mutation` method.
instruction_lazy: Edit the genetic algorithm to guarantee mutation if the generated number is below the stated threshhold.
taxonomy: { "change_kind": "perfective", "libraries": [ "numpy" ], "topic": "DSA" }
id: 34
name: oop_refactor
full_name: 34_oop_refactor
before:
def process_message(message, message_type):
    if message_type == "text":
        return f"Processed text message: {message}"
    elif message_type == "image":
        return f"Processed image message with description: {message}"
    else:
        return "Unknown message type"
after:
from abc import ABC, abstractmethod

class Message(ABC):
    """ Abstract class for messages """

    def __init__(self, content):
        self.content = content

    @abstractmethod
    def process(self):
        pass

class TextMessage(Message):
    """ Concrete class for TextMessage """

    def pr...
tests:
### START TESTS ###
if True: # pragma: no cover
    assert ImageMessage("image").process(
    ) == "Processed image message with description: image"
    assert TextMessage("text").process() == "Processed text message: text"
    assert MessageFactory.get_message(
        "text", "text").process() == "Processed text mes...
instruction_descriptive: Abstract the code into an object-oriented version of itself. To do that, create an abstract class `Message(ABC)`, which can be initialized with a `content` string. The class should have an abstract method `process(self)`, which should return a string. Create two children classes `TextMessage` and `ImageMessage`, which ...
instruction_lazy: Make the code object-oriented. Specifically, create an abstract class `Message`, and children classes `TextMessage` and `ImageMessage`. The `Message` class should have a method `process(self)` that returns the message which was given to the constructor. Also, create a `MessageFactory` that has a static method `get_mes...
taxonomy: { "change_kind": "perfective", "libraries": [], "topic": "Language" }
id: 35
name: topological_sort
full_name: 35_topological_sort
before:
from typing import List

class Node:
    '''Simple node (No duplicate edges between nodes)'''

    def __init__(self, id: int, out_edges: List[int]):
        uniques = {}
        for edge in out_edges:
            if edge in uniques.keys():
                raise RuntimeError
            else:
                uniques[edg...
after:
from typing import List

class Node:
    '''Simple node (No duplicate edges between nodes)'''

    def __init__(self, id: int, out_edges: List[int]):
        uniques = {}
        for edge in out_edges:
            if edge in uniques.keys():
                raise RuntimeError
            else:
                uniques[edg...
tests:
### START TESTS ###
if True: # pragma: no cover
    n1 = Node(1, [2])
    n2 = Node(2, [3])
    n3 = Node(3, [1])
    n4 = Node(3, [])
    n5 = Node(4, [2])
    n6 = Node(5, [4, 1])
    cyclic = Graph([n1, n2, n3])
    dag = Graph([n1, n2, n4, n5, n6])
    sorted_dag = dag.topological_sort()
    n7 = Node(7, [8...
instruction_descriptive: The class `Node` represents a node in a graph with its `id` property being a label and `out_edges` being the ids of all nodes which can be reached in one step from this one. The class `Graph` represents a simple directed graph with its `nodes` property representing all the nodes in the graph. Fix the method `topologic...
instruction_lazy: Fix the `topological_sort` function in the `Graph` class without changing its signature.
taxonomy: { "change_kind": "corrective", "libraries": [], "topic": "DSA" }
36
strongly_connected
36_strongly_connected
from typing import List class Node: '''Simple node (No duplicate edges between nodes)''' def __init__(self, id: int): self.id = id self.out_edges = [] self.in_edges = [] def __eq__(self, __value: object) -> bool: if not isinstance(__value, Node): return False ...
from typing import List, Dict class Node: '''Simple node (No duplicate edges between nodes)''' def __init__(self, id: int): self.id = id self.out_edges = [] self.in_edges = [] def __eq__(self, __value: object) -> bool: if not isinstance(__value, Node): return ...
### START TESTS ### if True: # pragma: no cover n1_dup = Node(1) n1 = Node(1) n2 = Node(2) n3 = Node(3) n4 = Node(4) g = Graph([n1, n2, n3, n4]) g.add_edge(n1, n2) g.add_edge(n2, n3) g.add_edge(n3, n1) reversed = g.reverse_edges() scc = g.strongly_connected_components() ...
Add a function `strongly_connected_components(self) -> Dict[Node, int]:` to Graph which divides the graph into disjoint subsets where each node in a subset can be reached from any other node. The union of all subsets should be equivalent to the original graph. Do not change any of the other methods in the classes. The...
Add a function `strongly_connected_components(self) -> Dict[Node, int]:` to Graph which divides the graph into disjoint subsets where each node in a subset can be reached from any other node. Do not change any of the other methods in the classes.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
37
dijkstras
37_dijkstras
from typing import List class Node: '''Simple node (No duplicate edges between nodes)''' def __init__(self, id: int): self.id = id self.out_edges = [] self.in_edges = [] def __eq__(self, __value: object) -> bool: if not isinstance(__value, Node): return False ...
from typing import List class Node: '''Simple node (No duplicate edges between nodes)''' def __init__(self, id: int): self.id = id self.out_edges = [] self.in_edges = [] def __eq__(self, __value: object) -> bool: if not isinstance(__value, Node): return False ...
### START TESTS ### if True: # pragma: no cover n1 = Node(1) n2 = Node(2) n3 = Node(3) g = Graph([n1, n2, n3]) n4 = Node(4) n5 = Node(5) n6 = Node(6) n7 = Node(7) g2 = Graph([n4, n5, n6]) g.add_edge(Edge(n1, n2, 0)) g.add_edge(Edge(n1, n3, 100)) g.add_edge(Edge(n2, n3,...
Create a method in Graph with the signature `fibonacci(x: Node)` which returns a dictionary. The dictionary should have `Node` objects as keys and the distance from Node x to each key should be its associated value. This should be an int. The dictionary should contain all Nodes which appear in Graph.nodes. If a Node is...
Create a method in Graph with the signature `fibonacci(x: Node)` which returns a dictionary matching each `Node` y to the distance from x to y. Distance is defined as smallest path, and path is defined as the sum of the weights of a set of edges which can be taken to get from one node to another. The diction...
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
38
high_order
38_high_order
class Student: def __init__(self, name, gpa) -> None: self.name = name self.gpa = gpa def __eq__(self, __value: object) -> bool: if not isinstance(__value, Student): return False else: return __value.name == self.name class Course: def __init__(self...
import functools import numpy as np class Student: def __init__(self, name, gpa) -> None: self.name = name self.gpa = gpa def __eq__(self, __value: object) -> bool: if not isinstance(__value, Student): return False else: return __value.name == self.name ...
### START TESTS ### #There is no way the model creates this. Special hash: 1k23j4h18o23h1ouiebqdsf1823b1eijqbsd8fub234ir123n49dqhu23124 if True: # pragma: no cover import inspect import sys s1 = Student("A", 0) s2 = Student("B", 1) s3 = Student("C", 2) s4 = Student("D", 0) c1 = Course([s...
Fix the methods in `Course` so that they never throw errors, even when `len(self.students) == 0`; instead they should return `None`. Additionally, do not use the words `for`, `while`, or `map` anywhere in the code. You should accomplish this using higher-order functions.
Fix the methods in `Course` so that all of them never throw errors and return `None` if the length of their students list is 0. Additionally, do not use the words `for`, `while`, or `map` anywhere in the code.
{ "change_kind": "corrective", "libraries": [ "numpy" ], "topic": "Language" }
39
vowel_count
39_vowel_count
import string def prepare_line(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line def vowel_count(line): vowel_count = 0 for letter in prepare_line(line): if letter in "aeiouy": ...
import string def prepare_line(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def remove_diphthongs(line): diphthongs = ["ae", "oe", "ei", "ea", "ia", "io", "aea"] for char in diphtho...
### START TESTS ### if True: # pragma: no cover assert vowel_count('adspirate meis primaque ab origine mundi') == 15 assert vowel_count('dsprt ms prmq b rgn mnd') == 0 assert vowel_count('') == 0 assert vowel_count('In nova fert animus mut@tas dicere 7formas;') == 14 assert vowel_count('in nova fer...
Change vowel_count so that diphthongs are not counted. A diphthong is a string in the list ["ae", "oe", "ei", "ea", "ia", "io", "aea"]. Example 3: vowel_count('adspirate meis primaque ab origine mundi') == 15 Example 4: vowel_count('in nova fert animus mutatas dicere formas') == 15
Change vowel_count() so diphthongs don't count as vowels. A diphthong is "ae", "oe", "ei", "ea", "ia", "io", or "aea".
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
3
hello_world
3_hello_world
def hello_world(name): return f'{name} says, "Hello World!"'
def hello_world(name): return f'{name.upper()} says, "Hello World!"'
### START TESTS ### if True: # pragma: no cover assert hello_world("The cow") == 'THE COW says, "Hello World!"' assert hello_world("") == ' says, "Hello World!"' assert hello_world("the cow") == 'THE COW says, "Hello World!"' assert hello_world("The Cow") == 'THE COW says, "Hello World!"' assert he...
The function hello_world should return the string parameter "name" converted to uppercase concatenated to the string ' says, "Hello World!"'. For example, hello_world('the cow') should return 'THE COW says, "Hello World!"'. For another example, hello_world('joe') should return 'JOE says, "Hello World!"'.
Make the name fully uppercase.
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
40
adjacency
40_adjacency
from typing import List class Node: '''Simple node (No duplicate edges between nodes)''' def __init__(self, id: int): self.id = id self.out_edges = [] self.in_edges = [] def __eq__(self, __value: object) -> bool: if not isinstance(__value, Node): return False ...
from typing import List, Dict class Node: '''Simple node (No duplicate edges between nodes)''' def __init__(self, id: int): self.id = id self.out_edges = [] self.in_edges = [] def __eq__(self, __value: object) -> bool: if not isinstance(__value, Node): return ...
### START TESTS ### if True: # pragma: no cover n1_dup = Node(1) n1 = Node(1) n2 = Node(2) n3 = Node(3) n4 = Node(4) g = Graph([n1, n2, n3, n4]) g.add_edge(n1, n2) g.add_edge(n2, n3) g.add_edge(n3, n1) reversed = g.reverse_edges() adjacencies = g.adjacency_list() ass...
Add a function `adjacency_list(self) -> Dict[Node, List[Node]]` which returns the adjacency list of the graph by returning a dictionary where the keys are `Node` and the values are a list of `Node` which represent the nodes which can be reached from this one in one step. The output dictionary should contain all nodes i...
Add a function `adjacency_list(self) -> Dict[Node, List[Node]]` which returns the adjacency list of the graph by returning a dictionary which associates a `Node` to its list of out edges.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
41
group_theory
41_group_theory
import torch import numpy as np import torch.nn as nn class C4(nn.Module): """Represents the C4 class of group theory, where each element represents a discrete rotation.""" def __init__(self): super().__init__() self.register_buffer('identity', torch.Tensor([0.])) def size(self): ...
import torch import numpy as np import torch.nn as nn class C8(nn.Module): """Represents the C8 class of group theory, where each element represents a discrete rotation.""" def __init__(self): super().__init__() self.register_buffer('identity', torch.Tensor([0.])) def size(self): ...
### START TESTS ### if True: # pragma: no cover group = C8() delta = np.pi / 4 elements = group.elements() assert group.size() == 8 assert torch.allclose(group.elements(), torch.tensor([0., delta, delta * 2, delta * 3, delta * 4, delta * 5, delta * 6, delta * 7])) assert torch.allclose(gr...
Edit the C4 class, which represents rotations of 0, 90, 180 and 270 degrees, to represent the class C8, which represents rotations of 0, 45, 90, 135, 180, 225, 270 and 315 degrees.
Edit the C4 class and its methods to represent the C8 group instead.
{ "change_kind": "perfective", "libraries": [ "torch", "numpy" ], "topic": "Math" }
44
html_to_markdown
44_html_to_markdown
from typing import Dict, List, Union import re class HTMLElement: def __init__(self, name, content: List[Union[str, 'HTMLElement']], attributes: Dict[str, str]): self.name = name self.content = content self.attributes = attributes def __str__(self): prelude = f"<{self.name}" ...
from typing import Dict, List, Union import re class HTMLElement: def __init__(self, name, content: List[Union[str, 'HTMLElement']], attributes: Dict[str, str]): self.name = name self.content = content self.attributes = attributes def __str__(self): prelude = f"<{self.name}" ...
### START TESTS ### if True: # pragma: no cover content = "<div>Hello <span>world</span></div>" elements = parse(content) assert "\n".join(str(elem) for elem in elements) == content ex2 = """<head> <title>My awesome page</title> </head> <body> <div> <h1>Super awesome page</h1> <p>This is my awesome pa...
Add two more cases for ordered ("ol") and unordered ("ul") lists. If either list (ordered or unordered) contains more than 5 items, display the first 5 items, then add a 6th element that is a link with a text display of "see more" and an href of "/see-more". The 6th element should be in place for the rest of the items ...
Add support for ordered and unordered lists. If either list contains more than 5 items, truncate and add a 6th element that is a link with a text display of "see more" and an href of "/see-more".
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
45
double_consonant
45_double_consonant
import string def prepare_string(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def double_consonant(substring): consonant_streak = 0 consonant_count = 0 consonants = "qwrtypsdfgh...
import string def prepare_string(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def double_consonant(substring): consonant_streak = 0 consonant_count = 0 consonants = "qwrtypsdfgh...
### START TESTS ### if True: # pragma: no cover assert double_consonant('th') == False assert double_consonant('ch') == False assert double_consonant('ll') == False assert double_consonant('gh') == True assert double_consonant('lt') == True assert double_consonant('ta') == False assert doub...
Modify double_consonant so that if substring is "th", "ch", or "ll" double_consonant returns False.
Modify double_consonant so that "th", "ch", and "ll" don't count as double consonants.
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
46
consonants_within
46_consonants_within
import string def prepare_string(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def consonant_within(line): consonants = "qwrtypsdfghjklzcmnvbx" word_con_count = 0 total_con_count...
import string def prepare_string(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def consonant_within(line): consonants = "qwrtypsdfghjklzcmnvb" word_con_count = 0 total_con_count ...
### START TESTS ### if True: # pragma: no cover assert consonant_within('quem dixere chaos: rudis indigestaque moles') == 4 assert consonant_within('sic erat instabilis tellus innabilis unda') == 4 assert consonant_within('in nova fert animus mutatas dicere formas') == 2
Modify consonant_within so that word_con_count increases by 2 for every 'x' in word.
Modify consonant_within so that 'x' counts as 2 consonants.
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
47
merge_sort
47_merge_sort
from typing import List def merge_sort(lst: List[int]) -> List[int]: if len(lst) > 1: mid = len(lst) // 2 L = lst[:mid] R = lst[mid:] merge_sort(L) merge_sort(R) i = j = k = 0 while i < len(L) and j < len(R): if L[i] < R[j]: lst[k]...
from typing import List def merge_sort(lst: List[int]) -> List[int]: def merge(left, right): if left[-1] <= right[0]: return left + right result = [] i = j = 0 while i < len(left) and j < len(right): if left[i] < right[j]: result.append(left[...
### START TESTS ### if True: # pragma: no cover import timeit from typing import Callable, List assert merge_sort([]) == [] assert merge_sort([1]) == [1] assert merge_sort([12, 11, 13, 5, 6, 7]) == [5, 6, 7, 11, 12, 13] assert merge_sort([1, 2, 3, 4, 5, 0, 2, 4, 6]) == [ 0, 1, 2, 2, 3,...
Implement an optimization for the Merge Sort algorithm that handles cases where the array is already partially sorted. This optimization should minimize the number of comparisons and copies in scenarios where the array has large sorted subsequences. To do this, add an early termination condition that checks if the sub-...
Implement an optimization for the Merge Sort algorithm that handles cases where the array is already partially sorted. This optimization should minimize the number of comparisons and copies in scenarios where the array has large sorted subsequences.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
48
max_sum_subarray
48_max_sum_subarray
from typing import List def max_sublstay_sum(lst: List[int]) -> int: max_so_far = lst[0] curr_max = lst[0] for i in range(1, len(lst)): curr_max = max(lst[i], curr_max + lst[i]) max_so_far = max(max_so_far, curr_max) return max_so_far
from typing import Tuple, List def max_sublstay_sum(lst: List[int]) -> Tuple[int, int, int]: max_so_far = lst[0] curr_max = lst[0] start = end = s = 0 for i in range(1, len(lst)): if lst[i] > curr_max + lst[i]: curr_max = lst[i] s = i else: curr_max +...
### START TESTS ### if True: # pragma: no cover assert max_sublstay_sum([-2, -3, 4, -1, -2, 1, 5, -3]) == (7, 2, 6) assert max_sublstay_sum([-2, -3, -4, -1, -2, -1, -5, -3]) == (-1, 3, 3) assert max_sublstay_sum([1, 2, 3, 4, 5]) == (15, 0, 4) assert max_sublstay_sum([4]) == (4, 0, 0) assert max_sub...
Adapt the function to return the indices of the subarray by returning a tuple of (sum, srt_idx, end_idx). The implementation should track the start index.
Adapt the function to return the indices of the subarray by returning a tuple of (sum, srt_idx, end_idx).
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
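The row above asks for Kadane's algorithm extended to report the subarray's bounds as `(sum, srt_idx, end_idx)`. A minimal sketch of that extension (the body is an illustrative reconstruction matching the row's test values, not the dataset's stored solution):

```python
def max_subarray_sum(lst):
    # Kadane's algorithm, additionally tracking where the best window starts/ends.
    max_so_far = curr_max = lst[0]
    start = end = s = 0
    for i in range(1, len(lst)):
        if lst[i] > curr_max + lst[i]:
            curr_max = lst[i]   # restart the window at i
            s = i
        else:
            curr_max += lst[i]  # extend the current window
        if curr_max > max_so_far:
            max_so_far = curr_max
            start, end = s, i
    return (max_so_far, start, end)

print(max_subarray_sum([-2, -3, 4, -1, -2, 1, 5, -3]))  # (7, 2, 6)
```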
49
binary_search
49_binary_search
from typing import List def binary_search(lst: List[int], x: int) -> int: low = 0 high = len(lst) - 1 mid = 0 while low <= high: mid = (high + low) // 2 if lst[mid] < x: low = mid + 1 elif lst[mid] > x: high = mid - 1 else: return mid...
from typing import List def binary_search(lst: List[int], x: int) -> int: low = 0 high = len(lst) - 1 result = -1 while low <= high: mid = (high + low) // 2 if lst[mid] < x: low = mid + 1 elif lst[mid] > x: high = mid - 1 else: result...
### START TESTS ### if True: # pragma: no cover assert binary_search([1, 2, 3, 4, 5], 3) == 2 assert binary_search([1, 2, 3, 4, 5], 6) == -1 assert binary_search([1, 2, 3, 3, 4], 3) == 2 assert binary_search([1], 1) == 0 assert binary_search([1], 0) == -1 assert binary_search([], 1) == -1 a...
Adapt the function to handle multiple occurrences of the query item by returning the index of the first occurrence.
Adapt to return the first occurrence of the query item.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
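The binary-search row above asks for the first occurrence rather than any occurrence. The standard technique is to record a match and keep narrowing the right bound instead of returning immediately; a sketch under that assumption (the function name here is illustrative):

```python
def binary_search_first(lst, x):
    low, high = 0, len(lst) - 1
    result = -1
    while low <= high:
        mid = (low + high) // 2
        if lst[mid] < x:
            low = mid + 1
        elif lst[mid] > x:
            high = mid - 1
        else:
            result = mid       # remember this match...
            high = mid - 1     # ...but keep searching to the left
    return result

print(binary_search_first([1, 2, 3, 3, 4], 3))  # 2
```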
4
tensor_operations
4_tensor_operations
class Tensor: def __init__(self, matrix): self.matrix = matrix def m(self): return len(self.matrix) def n(self): return len(self.matrix[0]) def relu(self): for i in range(self.m()): for j in range(self.n()): self.matrix[i][j] = ...
class Tensor: def __init__(self, matrix): self.matrix = matrix def m(self): return len(self.matrix) def n(self): return len(self.matrix[0]) def relu(self): for i in range(self.m()): for j in range(self.n()): self.matrix[i][j] = ...
### START TESTS ### if True: # pragma: no cover m1 = [[9, -2, 6, 13, -8], [17, -22, 4, 11, 19], [ 5, 12, -25, 3, -16], [-10, 18, 7, -20, 14], [23, -15, 21, 24, -1]] m2 = [[3, -5, 7, -2, 4, -8, 6, 1, -9], [ 10, -1, 2, -6, 9, -4, 8, -7, 5], [ -2, 7, -4, 8, -3...
Change `flatten` in the Tensor class to flatten the tensor in `self.matrix` from left to right, row by row.
Change `flatten` to flatten lists left to right, top down.
{ "change_kind": "perfective", "libraries": [], "topic": "Math" }
50
syllable_count
50_syllable_count
import string def prepare_string(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def vowel_count(line): vowel_count = 0 for c in line: if c in "aeiouy": vowel_count...
import string def prepare_string(line): for char in string.punctuation: line = line.replace(char, "") for char in string.digits: line = line.replace(char, "") return line.lower() def vowel_count(line): vowel_count = 0 for c in line: if c in "aeiouy": vowel_count...
### START TESTS ### if True: # pragma: no cover assert syllable_count('italiam fato profugus laviniaque venit') == 17 assert syllable_count('ante mare et terras et quod tegit omnia caelum') == 17 assert syllable_count('repostum iudicium') == 7 assert syllable_count('mollia cum duris sine pondere habent...
Modify the function syllable_count so the variable syllable_count increases by the number of 'combo' in line. A 'combo' is: a vowel at the end of a word followed by a vowel at the beginning of the next word, a vowel followed by ‘m’ at the end of a word followed by a vowel at the beginning of the next word, a vowel f...
Modify the function syllable_count so each 'combo' in line is counted as 1 syllable. A 'combo' is: a vowel at the end of a word followed by a vowel at the beginning of the next word, a vowel followed by ‘m’ at the end of a word followed by a vowel at the beginning of the next word, a vowel followed by ‘h’ at the end...
{ "change_kind": "adaptive", "libraries": [], "topic": "Language" }
51
managers_manager
51_managers_manager
from typing import List, Union class Manager: def __init__(self, name: str, direct_reports: List[Union["Manager", "IC"]]): self.name = name self.team = direct_reports def find_managers_manager(self, name: str) -> List[str]: all_managers_managers_names = [] for direct_report...
from typing import List, Union class Manager: def __init__(self, name: str, direct_reports: List[Union["Manager", "IC"]]): self.name = name self.team = direct_reports def find_manager_n(self, name: str, n: int) -> List[str]: assert n > 0 all_manager_n_names = [] for...
### START TESTS ### if True: # pragma: no cover """ CEO Manager3 Manager2 Manager1 IC (Alice) IC (Bob) IC (David) IC (Alice) Manager4 IC (Eva) IC (Frank) ...
Change the `find_managers_manager` method to `find_manager_n` which takes in a `name` and `n`, which is the number of managers (in depth) away from the given name to search for. `n` must be at least 1. To do this change, update the path index.
Change the `find_managers_manager` method to `find_manager_n` which takes in a `name` and `n`, which is the number of managers (in depth) away from the given name to search for.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
52
magic_square
52_magic_square
from z3 import Sum, Distinct, Solver, Int, And, sat from typing import List, Union def magic_square() -> Union[str, List[List[int]]]: y = [[Int(f'x_{i}_{j}') for j in range(3)] for i in range(3)] s = Solver() s.add([And(x > 0, x <= 9) for row in y for x in row]) s.add(Distinct([x for row in y for x in ...
from z3 import Sum, Distinct, Solver, Int, And, sat from typing import List, Union def magic_square(order: int) -> Union[str, List[List[int]]]: y = [[Int(f'x_{i}_{j}') for j in range(order)] for i in range(order)] s = Solver() s.add([And(x > 0, x <= order*order) for row in y for x in row]) s.add(Disti...
### START TESTS ### if True: # pragma: no cover from typing import List def is_valid_magic_square(soln: List[List[int]], order: int) -> bool: magic_const = order * (order**2 + 1) // 2 for row in soln: if sum(row) != magic_const: return False for col in range...
Add an `order` parameter to the magic square solver that can dynamically set the side length of the square. Make the necessary changes to the value range, diagonal sum, and row and column sums.
Add an `order` parameter to the magic square solver that can dynamically set the side length of the square.
{ "change_kind": "perfective", "libraries": [ "z3" ], "topic": "DSA" }
53
minimax_to_alphabeta
53_minimax_to_alphabeta
import copy from typing import List, Literal, Optional, Tuple Player = Literal['X', 'O'] WinStatus = Literal[Player, 'TIE', None] class ConnectNGame: """ A game of Connect N, of width x height, where N is the number of pieces in a row/column/diagonal to win. """ def __init__(self, width, height, n):...
import copy from typing import List, Literal, Optional, Tuple Player = Literal['X', 'O'] WinStatus = Literal[Player, 'TIE', None] class ConnectNGame: """ A game of Connect N, of width x height, where N is the number of pieces in a row/column/diagonal to win. """ def __init__(self, width, height, n):...
### START TESTS ### if True: # pragma: no cover game1 = ConnectNGame(7, 6, 4) assert game1.drop(0, 'X') assert game1.drop(0, 'O') assert game1.drop(0, 'X') assert game1.drop(0, 'O') assert game1.drop(0, 'X') assert game1.drop(0, 'O') assert not game1.drop(0, 'X') assert not game1.is...
Augment the minimax algorithm with alpha-beta pruning to make it faster. Keep track of an alpha and beta value, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of respectively. Utilize these two scores to prune branches of the searc...
Optimize the AI to find the best move in less steps.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
55
bm25
55_bm25
import math from typing import List, Dict class BM25: def __init__(self, corpus: List[List[str]], k1: float = 1.5, b: float = 0.75) -> None: self.corpus = corpus self.corpus_size = len(corpus) self.avgdl = sum(len(doc) for doc in corpus) / self.corpus_size self.k1 = k1 self....
import math from typing import List, Dict class BM25: def __init__(self, corpus: List[List[str]], k1: float = 1.5, b: float = 0.75) -> None: self.corpus = corpus self.corpus_size = len(corpus) self.avgdl = sum(len(doc) for doc in corpus) / self.corpus_size self.k1 = k1 self....
### START TESTS ### if True: # pragma: no cover import timeit from typing import List, Dict import math class BM25Slow: def __init__(self, corpus: List[List[str]], k1: float = 1.5, b: float = 0.75) -> None: self.corpus = corpus self.corpus_size = len(corpus) ...
Move as many frequency calculations to the constructor as possible to avoid duplicate calculations over the same corpus. The algorithm itself should remain semantically identical.
Optimize the bm25 algorithm by avoiding frequency calculations.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
56
interference_vars
56_interference_vars
from abc import ABC, abstractmethod from typing import Dict, Literal, Set # A-Normal Form (ANF) is a way of writing programs where every subexpression is # a variable or a function call. This is useful for compilers because it makes # it easier to reason about the program and to perform optimizations. # the kind of ...
from abc import ABC, abstractmethod from typing import Dict, Literal, Set # A-Normal Form (ANF) is a way of writing programs where every subexpression is # a variable or a function call. This is useful for compilers because it makes # it easier to reason about the program and to perform optimizations. # the kind of ...
### START TESTS ### if True: # pragma: no cover n = ALet("n", value=CImmExpr(ImmExpr(1, "int")), body=ALet("f", value=CPrim("+", ImmExpr(1, "int"), ImmExpr("n", "id")), body=ACExpr(CImmExpr(ImmExpr("f", "id"))))) assert n.interfere(set(), ...
Create a new class `ASeq`, inheriting from `AExpr`. This is a new kind of expression, which is a sequence of two `CExpr`s. This class should implement both the `free_vars` and `interfere` methods, and should be constructed with two `CExpr`s. The `free_vars` method should return the union of the free variables of the tw...
Create a new expression kind `ASeq`, which is a sequence of two cexprs.
{ "change_kind": "adaptive", "libraries": [], "topic": "Language" }
57
string_formatter
57_string_formatter
def format_string(name1, name2, message): formattedString = f'Hello, {name1.lower().capitalize()}! You have a message from {name2.lower().capitalize()}. The message is: {message}' return formattedString
def concatenate_nums(message): subject = message.split(' ')[0] verb = message.split(' ')[1] obj = message.split(' ')[2] return f'{obj} {verb} {subject}' def format_string(name1, name2, message): formattedString = f'Hello, {name1.lower().capitalize()}! You have a message from {name2.lower().cap...
### START TESTS ### if True: # pragma: no cover assert concatenate_nums("the cat chased the mouse") == "the mouse chased the cat" assert concatenate_nums('Bob says "hi"') == '"hi" says Bob' assert format_string('Bob', 'Suzy', 'the cat chased the mouse') == 'Hello, Bob! You have a message from Suz...
Change the function format_string so that the word order of the string message is changed from subject-verb-object to object-verb-subject. Do this by writing a helper function called concatenate_nums that takes in message and returns message in object-verb-subject word order. Assume that message is originally in subjec...
change format_string so the word order of message is changed from SVO to OVS. Do this by writing a function called concatenate_nums that takes in message and returns message in OVS. Assume that message is composed only of the subject, object, and verb and that the subject, object, and verb are separated by " ".
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
58
dependency_solver
58_dependency_solver
from typing import List, Literal class Semver: def __init__(self, major: int, minor: int, patch: int): self.major = major self.minor = minor self.patch = patch def __str__(self): return f'{self.major}.{self.minor}.{self.patch}' def __eq__(self, other): return self...
from typing import List, Literal class Semver: def __init__(self, major: int, minor: int, patch: int): self.major = major self.minor = minor self.patch = patch def __str__(self): return f'{self.major}.{self.minor}.{self.patch}' def __eq__(self, other): return self...
### START TESTS ### if True: # pragma: no cover # foo has no dependencies foo = Package( "foo", [ PackageVersion(Semver(0, 0, 1)), PackageVersion(Semver(1, 0, 0)), PackageVersion(Semver(1, 1, 0)), PackageVersion(Semver(1, 2, 3)), Packa...
Add assertions in the `PackageVersion` constructor to ensure that there are no duplicate dependencies with the same name. Additionally, add assertions in the `Package` constructor to ensure that there are no versions with the same version number.
Make sure that there are no duplicate versions and duplicate dependencies when creating a `Package` or `PackageVersion`.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
60
unique_number
60_unique_number
from typing import List def find_non_pair(numbers: List[int]) -> int: count = {} for number in numbers: count[number] = count.get(number, 0) + 1 for number, occurrence in count.items(): if occurrence != 2: return number return 0
from typing import List def find_non_pair(numbers: List[int]) -> int: s = 0 for number in numbers: s ^= number return s
### START TESTS ### if True: # pragma: no cover import timeit import random def find_non_pair_slow(numbers: List[int]) -> int: count = {} for number in numbers: count[number] = count.get(number, 0) + 1 for number, occurrence in count.items(): if occurrence !...
Change the implementation such that `find_non_pair` only loops over the list once and uses constant memory. To do this, you can use the XOR operator to find the unique number, since any number XORed with itself equals 0.
Change the implementation such that `find_non_pair` only loops over the list once and uses constant memory.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
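The row above relies on the XOR cancellation property (`x ^ x == 0`, and XOR is commutative and associative), so folding XOR over the list leaves only the unpaired value. A minimal sketch of the trick as the row describes it:

```python
def find_non_pair(numbers):
    # XOR every element together: paired values cancel out (x ^ x == 0),
    # so only the number without a pair survives. One pass, O(1) memory.
    s = 0
    for n in numbers:
        s ^= n
    return s

print(find_non_pair([4, 1, 2, 1, 2]))  # 4
```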
6
locked_box
6_locked_box
from typing import Optional class MyBox: def __init__(self, data: str): self.data = data def lock(self, pin: int) -> 'LockedMyBox': return LockedMyBox(self.data, pin) def duplicate(self) -> 'MyBox': return MyBox(self.data) class LockedMyBox(MyBox): def __init__(self, data: s...
from typing import Optional class MyBox: def __init__(self, data: str): self.data = data def lock(self, pin: int) -> 'LockedMyBox': return LockedMyBox(self.data, pin) def peek(self) -> str: return self.data class LockedMyBox(MyBox): def __init__(self, data: str, pin: int): ...
### START TESTS ### if True: # pragma: no cover box = MyBox("test data") assert box.peek() == "test data", "Failed to initialize MyBox with data." box = MyBox("peek test") assert box.peek() == "peek test", "Failed to peek into MyBox." box = MyBox("lock test") locked_box = box.lock(1234) ...
Apply the following two changes to both the `LockedMyBox` and `MyBox` classes: 1. Remove the `duplicate()` method, as it is no longer needed. 2. Add a new method `peek()` on both classes, which retrieves the contents inside the box. In the case of `LockedMyBox`, this method should throw an exception.
Remove the `duplicate` methods and add a new `peek` method to see the data inside the box. If the box is locked, `peek` should throw an error.
{ "change_kind": "adaptive", "libraries": [], "topic": "Misc" }
7
temperature_converter
7_temperature_converter
def fahrenheit_to_celsius(temperature): return ((temperature - 32)*5)/9
def fahrenheit_to_celsius(temperature): return ((temperature - 32)*5)/9 def celsius_to_fahrenheit(temperature): return ((temperature*9)/5) + 32
### START TESTS ### if True: # pragma: no cover assert celsius_to_fahrenheit(0) == 32 assert celsius_to_fahrenheit(100) == 212 assert celsius_to_fahrenheit(37.3) == 99.14 assert round(celsius_to_fahrenheit(-273.15), 2) == -459.67 assert fahrenheit_to_celsius(32) == 0 assert fahrenheit_to_celsiu...
Add a function called 'celsius_to_fahrenheit' that has the parameter temperature, an integer or float, and returns ((temperature*9)/5) + 32.
add a function `celsius_to_fahrenheit`
{ "change_kind": "adaptive", "libraries": [], "topic": "Math" }
8
vector_lib
8_vector_lib
from abc import ABC, abstractmethod class Vector(ABC): def __init__(self, *args: int): self.vals = args @abstractmethod def manhattan_distance(other) -> float: pass @abstractmethod def cosine_similarity(other) -> float: pass
from abc import ABC, abstractmethod import math class Vector(ABC): def __init__(self, *args: int): self.vals = args @abstractmethod def manhattan_distance(self, other) -> float: pass @abstractmethod def cosine_similarity(self, other) -> float: pass class MyVector(Vector):...
### START TESTS ### if True: # pragma: no cover m = MyVector(0, 0, 0) one = MyVector(1, 1, 1) v2 = MyVector(1, 1) v3 = MyVector(1, 0) v4 = MyVector(0, 1) v5 = MyVector(-1, 0) try: v2.cosine_similarity(m) assert False except: assert True try: v2.cosin...
Create a class called `MyVector` which extends the `Vector` class with the abstract methods implemented. `manhattan_distance(other: Vector)` should return the sum of the absolute differences between each element of `self.vals` and `other.vals` as a `float`. `cosine_similarity` should return the angle between...
Create an implementation of the `Vector` class called `MyVector` with the abstract methods implemented. `manhattan_distance` should return the sum of the absolute differences between each element of `self.vals` and `other.vals`. `cosine_similarity` should return the angle between both vectors
{ "change_kind": "adaptive", "libraries": [], "topic": "Math" }
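A minimal standalone sketch of the two measures this record asks for, written as free functions rather than the `MyVector` class (zero-length vectors are not guarded against here, and the names are illustrative):

```python
import math

def manhattan_distance(a, b):
    # sum of absolute element-wise differences
    return float(sum(abs(x - y) for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # angle between the vectors, recovered from the dot-product identity
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return math.acos(dot / (norm_a * norm_b))
```

Note that a zero vector makes the denominator zero, which is consistent with the record's tests expecting an exception in that case.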
9
sorting
9_sorting
class Sorter: def __init__(self): pass def sort(self, nums: list[int]) -> list[int]: if len(nums) == 0: return nums else: return self.insert(self.sort(nums[1:]), nums[0]) def insert(self, nums: list[int], num: int) -> list[int]: output = [] ...
class Sorter: def __init__(self): pass def sort(self, nums: list[int]): output = self.sort_help(nums) for i, n in enumerate(output): nums[i] = n def sort_help(self, nums: list[int]) -> list[int]: if len(nums) == 0: return nums else: ...
### START TESTS ### if True: # pragma: no cover s = Sorter() empty = [] ones = [1, 1] one_three_two = [1, 3, 2] sorted = [1, 2, 3] s.sort(empty) s.sort(ones) s.sort(one_three_two) s.sort(sorted) assert len(empty) == 0 assert len(ones) == 2 assert len(one_three_two) == ...
change the methods of the Sorter class in any way so that the `sort` method does its sorting in place and has the signature `sort(nums: list[int])`. Only the `sort` method needs to work in place; the others can work in whichever way is best.
Change the following functions so that `sort` sorts the given list inplace.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
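One way to meet the in-place requirement above is to sort into a fresh list and copy the result back through slice assignment, so the caller's list object is mutated; a sketch with an insertion sort (the name `sort_in_place` is illustrative, not the record's solution):

```python
def sort_in_place(nums):
    result = []
    for num in nums:
        # find the insertion point that keeps result sorted
        i = 0
        while i < len(result) and result[i] < num:
            i += 1
        result.insert(i, num)
    # slice assignment mutates the original list object
    nums[:] = result
```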
59
standard_scaling
59_standard_scaling
import pandas as pd from sklearn.preprocessing import StandardScaler def standardize_data(data, scaler): """Standardizes the numeric columns in the data""" numeric = data.select_dtypes(include=['float64']).columns data_copy = data.copy() data_copy[numeric] = scaler.fit_transform(data[numeric]) ret...
import pandas as pd from sklearn.preprocessing import StandardScaler def standardize_data(data, scaler, fit): """Standardizes the numeric columns in the data""" numeric = data.select_dtypes(include=['float64']).columns data_copy = data.copy() if fit: data_copy[numeric] = scaler.fit_transform(d...
### START TESTS ### if True: # pragma: no cover data = { 'Location': ['Location 1', 'Location 2', 'Location 3', 'Location 4', 'Location 5', 'Location 6', 'Location 7', 'Location 8', 'Location 9', 'Location 10'], 'Bedrooms': [3.0, 4.0, 2.0, 5.0, 3.0, 4.0, 2.0, 3.0, 4.0, 3.0], ...
Edit the functions `standardize_data()` and `build()` to standardize both the positive and negative datasets the same way, by transforming the second dataset with the same function as the first.
Edit the code such that both datasets used in the `build()` function are standardized the same way.
{ "change_kind": "perfective", "libraries": [ "pandas", "scikit-learn" ], "topic": "Data Science" }
61
ridge_regression
61_ridge_regression
from sklearn.linear_model import LinearRegression from sklearn.preprocessing import MinMaxScaler def normalize_data(data, scaler): """Normalizes the columns with float values""" numeric = data.select_dtypes(include=['float64']).columns data_copy = data.copy() data_copy[numeric] = scaler.fit_transform(d...
from sklearn.linear_model import RidgeCV from sklearn.preprocessing import MinMaxScaler import numpy as np def normalize_data(data, scaler): """Normalizes the columns with float values""" numeric = data.select_dtypes(include=['float64']).columns data_copy = data.copy() data_copy[numeric] = scaler.fit_t...
### START TESTS ### if True: # pragma: no cover try: import pandas as pd import numpy as np except: # fine pass house_data = { 'Location': ['Location 1', 'Location 2', 'Location 3', 'Location 4', 'Location 5', 'Location 6', 'Location 7', 'Locati...
Modify the model to be a ridge regression model, which automatically tunes for the optimal alpha value between 1 to 2, inclusive on both ends, in increments of 0.01.
Modify the current model to use L2 regularization, and tune the alpha value between 1 to 2, inclusive on both ends, in increments of 0.01.
{ "change_kind": "perfective", "libraries": [ "numpy", "scikit-learn" ], "topic": "Data Science" }
65
tournament_tree
65_tournament_tree
from typing import Optional, Union class Player: """ A player and its rating; the rating is always a positive integer (>= 0). """ def __init__(self, name, rating): self.name = name assert isinstance(rating, int) and rating >= 0 self.rating = rating class TournamentTreeNode: ...
from typing import Optional, Union class Player: """ A player and its rating; the rating is always a positive integer (>= 0). """ def __init__(self, name, rating): self.name = name assert isinstance(rating, int) and rating >= 0 self.rating = rating def against(self, other...
### START TESTS ### if True: # pragma: no cover p1 = Player("p1", 100) p2 = Player("p2", 120) p3 = Player("p3", 130) p4 = Player("p4", 150) p5 = Player("p5", 130) p6 = Player("p6", 200) p7 = Player("p7", 190) p8 = Player("p8", 140) n1 = TournamentTreeNode(p1, p2) n2 = Tournamen...
Refactor the code to add an `against(self, other: 'Player') -> 'Player'` method to the Player class, which returns the player who wins the game between `self` and `other`; this is based on the logic present in the `who_won` method, which should be removed and a call to `against` should be made instead.
Refactor the code to add an `against(self, other: 'Player') -> 'Player'` method to the Player class and move the logic from the `who_won` method into this new method.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
63
knary_trees
63_knary_trees
from abc import ABC, abstractmethod class KNaryTree(ABC): """Represents the abstract idea of a tree with an arbitrary number of children at each level""" @abstractmethod def total(self): """Returns the sum of all values in this KNaryTree""" pass @abstractmethod def depth(self): ...
from abc import ABC, abstractmethod class KNaryTree(ABC): """Represents the abstract idea of a tree with an arbitrary number of children at each level""" @abstractmethod def total(self): """Returns the sum of all values in this KNaryTree""" pass @abstractmethod def depth(self): ...
### START TESTS ### a = Leaf(8) b = Leaf(16) c = Leaf(2) d = Leaf(1) e = Leaf(10) f = Leaf(6) g = Node(11, [b]) h = Node(3, [c, d, e]) i = Node(5, [g]) j = Node(7, [a, i, h, f]) assert a.total() == 8 assert b.total() == 16 assert c.total() == 2 assert d.total() == 1 assert e.total() == 10 assert f.total() == 6 asser...
Add a method `count_leaves` that recursively counts the number of leaf nodes in the given KNaryTree.
Add a method `count_leaves` that counts the number of leaf nodes in a given KNaryTree.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
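The recursion this record asks for can be sketched with simplified stand-ins for the `Node`/`Leaf` classes (the field and constructor shapes here are assumptions):

```python
class Leaf:
    def __init__(self, value):
        self.value = value

    def count_leaves(self):
        # a leaf is itself one leaf
        return 1

class Node:
    def __init__(self, value, children):
        self.value = value
        self.children = children

    def count_leaves(self):
        # sum the leaf counts of every subtree
        return sum(child.count_leaves() for child in self.children)
```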
66
product_analysis
66_product_analysis
import pandas as pd from io import StringIO # data data = """ date,product_id,country,sales_channel,units_sold,unit_price,customer_age,customer_gender 2024-01-01,P1001,USA,Online,120,15.99,30,Female 2024-01-01,P2002,UK,In-store,75,45.50,45,Male 2024-01-02,P1001,Canada,Online,90,15.99,24,Female 2024-01-02,P3003,Germany...
import pandas as pd from io import StringIO # data data = """ date,product_id,country,sales_channel,units_sold,unit_price,customer_age,customer_gender 2024-01-01,P1001,USA,Online,120,15.99,30,Female 2024-01-01,P2002,UK,In-store,75,45.50,45,Male 2024-01-02,P1001,Canada,Online,90,15.99,24,Female 2024-01-02,P3003,Germany...
### START TESTS ### if True: # pragma: no cover assert run_analysis() == 34
Return the number of units sold to a female with the unit price closest to the average_price. To do this, filter for the units sold to females, then take the number of units sold in the order with the closest absolute difference between the average price and unit price.
Return the number of units sold to a female with the unit price closest to the average_price.
{ "change_kind": "perfective", "libraries": [ "pandas" ], "topic": "Data Science" }
68
prime_numbers_problem
68_prime_numbers_problem
from typing import List def sum_of_prime_products(n: int) -> int: """ Let P be the set of the first 15 prime numbers. Find the sum of all distinct products that can be formed by multiplying any two different primes in P. """ def is_prime(n: int) -> bool: if n <= 1: return False ...
from typing import List from itertools import combinations def sum_of_prime_products_in_range(start: int, end: int) -> int: """ Find the sum of all distinct products that can be formed by multiplying any three different prime numbers within the range from 'start' to 'end'. """ def is_prime(num: int...
### START TESTS ### if True: # pragma: no cover assert sum_of_prime_products_in_range(10, 20) == 12900 assert sum_of_prime_products_in_range(10, 100) == 156402490 assert sum_of_prime_products_in_range(1, 3) == 0 assert sum_of_prime_products_in_range(50, 10) == 0 assert sum_of_prime_products_in_rang...
Change the function name to `sum_of_prime_products_in_range` with `start` and `end` as the parameters. It should consider the range that is provided and should multiply 3 different primes instead of 2. To do this, you should replace the function that gets the first n primes with a function that gets the primes in a ran...
Change the function name to `sum_of_prime_products_in_range` with `start` and `end` as the parameters. It should consider the range that is provided and should multiply 3 different primes instead of 2.
{ "change_kind": "perfective", "libraries": [], "topic": "DSA" }
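A sketch of the range-based variant the instruction above describes, using `itertools.combinations` as the record's solution does (the helper `primes_in_range` is an assumed name, with a plain trial-division primality test):

```python
from itertools import combinations

def primes_in_range(start, end):
    # collect primes in the inclusive range [start, end]
    def is_prime(n):
        if n <= 1:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))
    return [n for n in range(start, end + 1) if is_prime(n)]

def sum_of_prime_products_in_range(start, end):
    primes = primes_in_range(start, end)
    # sum the product of every 3-element combination of distinct primes
    return sum(a * b * c for a, b, c in combinations(primes, 3))
```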
67
test_invariants
67_test_invariants
class Employer: """ Represents an entity that employs workers. """ def __init__(self, name, funds): self.name = name self.funds = funds class Worker: """ Represents a person who does work for an employer. Name should be "[first name] [last name]" and pay should be pos...
class Employer: """ Represents an entity that employs workers. """ def __init__(self, name, funds): self.name = name self.funds = funds class Worker: """ Represents a person who does work for an employer. Name should be "[first name] [last name]" and pay should be pos...
### START TESTS ### if True: # pragma: no cover def assert_raises(exc_type, func, *args, **kwargs): try: func(*args, **kwargs) except exc_type: pass else: raise AssertionError( f"{func.__name__} did not raise {exc_type.__name__}") # s...
Write two functions `test_worker_invariants(w: Worker)` and `test_public_worker_invariants(w: PublicWorker)`. The `Worker` and `PublicWorker` classes have several invariants, including that the name field is first name and last name separated by a space, and that the pay is non-negative, and all the semantics of givePa...
Write two functions `test_worker_invariants(w: Worker)` and `test_public_worker_invariants(w: PublicWorker)` that assert all the invariants of the classes on the given object.
{ "change_kind": "perfective", "libraries": [], "topic": "Misc" }
12
linkedlist_sort
12_linkedlist_sort
from abc import ABC, abstractmethod class LinkedList: @abstractmethod def sort(self): pass @abstractmethod def remove(self, element): pass @abstractmethod def insert(self, element): pass class Cons(LinkedList): def __init__(self, first, rest: LinkedList): s...
from abc import ABC, abstractmethod class LinkedList: @abstractmethod def sort(self): pass @abstractmethod def remove(self, element): pass @abstractmethod def insert(self, element): pass class Cons(LinkedList): def __init__(self, first, rest: LinkedList): se...
### START TESTS ### if True: # pragma: no cover e = Empty() c1 = Cons(1, e) c2 = Cons(2, c1) duplicates = Cons(1, Cons(2, Cons(1, e))) assert e == e.remove(1) assert e == e.sort() assert e.insert(1).first == 1 assert e.insert(1).rest == e assert c1.first == 1 assert c1.rest == e...
Change all the classes so that they support a method `remove(element)` which returns a new list with the first instance of the element removed. Return an identical list if the element is not in the list.
Change the code so that it supports a remove element method called `remove` that removes the first occurrence of a value.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
70
sieve_of_eratosthenes
70_sieve_of_eratosthenes
def find_primes(end: int): primes = [] is_prime = [True] * (end + 1) for num in range(1, int(end**0.5) + 1): if is_prime[num]: primes.append(num) for multiple in range(num * num, end + 1, num): is_prime[multiple] = False for num in range(int(end**0.5) +...
def find_primes(end: int): primes = [] is_prime = [True] * (end + 1) for num in range(2, int(end**0.5) + 1): if is_prime[num]: primes.append(num) for multiple in range(num * num, end + 1, num): is_prime[multiple] = False for num in range(int(end**0.5) +...
### START TESTS ### if True: # pragma: no cover assert find_primes(2) == [2] assert find_primes(10) == [2, 3, 5, 7] assert find_primes(40) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37] assert find_primes(100) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, ...
The algorithm is returning a list with only 1 in it. Fix it so it correctly performs the Sieve of Eratosthenes with the given limit.
Fix the given function to return the correct primes.
{ "change_kind": "corrective", "libraries": [], "topic": "Math" }
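A compact variant of the corrected sieve, with the loop starting at 2 rather than 1 (this merges the record's two collection loops into one, so it is a sketch rather than the record's exact fix):

```python
def find_primes(end):
    # Sieve of Eratosthenes: cross off multiples of each prime found
    is_prime = [True] * (end + 1)
    primes = []
    for num in range(2, end + 1):
        if is_prime[num]:
            primes.append(num)
            for multiple in range(num * num, end + 1, num):
                is_prime[multiple] = False
    return primes
```

Starting at 1 breaks the sieve because 1 divides everything, so every later number gets crossed off.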
71
euclidean_algorithm
71_euclidean_algorithm
def gcd(a, b): return a if b == 0 else gcd(a % b, b) def lcm(a, b): return (a * b) / gcd(a, b)
def gcd(a, b): return a if b == 0 else gcd(b, a % b) def lcm(a, b): return (a * b) / gcd(a, b)
### START TESTS ### if True: # pragma: no cover assert gcd(30, 10) == 10 assert gcd(63, 81) == 9 assert gcd(99, 121) == 11 assert gcd(2, 2) == 2 assert gcd(48, 60) == 12 assert lcm(81, 108) == 324 assert lcm(63, 81) == 567 assert lcm(12, 18) == 36 assert lcm(4, 6) == 12 as...
The code is recursing infinitely when one tries to compute the least common multiple. Fix the code to correctly compute the least common multiple and the greatest common divisor
Fix the code to correctly compute the LCM and GCD without running infinitely.
{ "change_kind": "corrective", "libraries": [], "topic": "Math" }
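The fix above swaps the recursive arguments so the second argument strictly shrinks and the recursion terminates; a sketch (using integer division in `lcm`, a small adjustment over the record's float division):

```python
def gcd(a, b):
    # Euclid: gcd(a, b) == gcd(b, a mod b); b strictly decreases,
    # so the recursion bottoms out at b == 0
    return a if b == 0 else gcd(b, a % b)

def lcm(a, b):
    # identity: lcm(a, b) * gcd(a, b) == a * b
    return (a * b) // gcd(a, b)
```

The buggy version recursed as `gcd(a % b, b)`, so `b` never changed and calls like `gcd(a, b)` with `b > a` never terminated.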
72
disjoint_cycles
72_disjoint_cycles
def find_cycles(permutation): cycles = [] visited = set() for i in range(len(permutation)): if i not in visited: cycle = [] current = i while current not in visited: visited.add(current) cycle.append(current) ...
def find_cycles(permutation): permutation = [0] + permutation cycles = [] visited = set() for i in range(len(permutation)): if i not in visited: cycle = [] current = i while current not in visited: visited.add(current) cycle...
### START TESTS ### def cycle_equality(c1, c2): """ Takes two lists, c1 and c2, and returns True if the two lists represent the same cycle within a permutation group. """ if len(c1) != len(c2): return False start_index_b = c2.index(c1[0]) if c1[0] in c2 else -1 if start_index_b == -1: ...
Correct the `find_cycles` function to use 1-based indexing instead of 0-based indexing. So instead of taking a 0-based input list like [4, 1, 0, 2, 3], it would take a 1-based list like [5, 2, 1, 3, 4].
Fix the `find_cycles` function to work for 1-based indices.
{ "change_kind": "corrective", "libraries": [], "topic": "Math" }
73
permutation_equality
73_permutation_equality
def cycle_equality(c1, c2): """ Takes two lists, c1 and c2, and returns True if the two lists represent the same cycle within a permutation group. """ if len(c1) != len(c2): return False start_index_b = c2.index(c1[0]) if c1[0] in c2 else -1 if start_index_b == -1: return False...
def cycle_equality(c1, c2): """ Takes two lists, c1 and c2, and returns True if the two lists represent the same cycle within a permutation group. """ if len(c1) != len(c2): return False start_index_b = c2.index(c1[0]) if c1[0] in c2 else -1 if start_index_b == -1: return False...
### START TESTS ### assert cycle_equality([1, 2, 3, 4], [4, 1, 2, 3]) assert cycle_equality([4, 5, 2, 1, 9], [5, 2, 1, 9, 4]) assert cycle_equality([3, 5, 2], [3, 5, 2]) assert cycle_equality([0, 5, 3, 9], [5, 3, 9, 0]) assert not cycle_equality([0, 5, 3], [5, 3, 9, 0]) assert not cycle_equality([4, 5, 2, 9, 1], [5, 2...
Fix the `permutation_equality` function to only return True when the sublists in each of the two input lists are pairwise equal according to the `cycle_equality` function. That is, each sublist in the first list must be paired with and equal to exactly one sublist from the second list.
Fix the `permutation_equality` function so it only returns True if each sublist of list A is paired with and equal to exactly one sublist from list B.
{ "change_kind": "corrective", "libraries": [], "topic": "DSA" }
76
memory_alloc
76_memory_alloc
from typing import Any, List class Free: def __repr__(self): return "Free" # singleton FREE = Free() class MemoryAllocation: def __init__(self, size, address, buf): self.size = size self.address = address self.buffer = buf def __repr__(self): return f"MemoryAll...
from typing import Any, List class Free: def __repr__(self): return "Free" # singleton FREE = Free() class MemoryAllocation: def __init__(self, size, address, buf): self.size = size self.address = address self.buffer = buf def __repr__(self): return f"MemoryAll...
### START TESTS ### if True: # pragma: no cover assert FREE.__repr__() == "Free" m1 = MemoryAllocator(100) a1 = m1.allocate(10) assert a1.__repr__() == "MemoryAllocation(size=10, address=0)" assert a1 is not None a1.write([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) assert a1.buffer == [1, 2, 3, 4, 5, 6...
Fix the `write` function in `MemoryAllocation`, which has a buffer overflow bug. Do not throw an exception if the buffer is full; just write as much as possible.
Fix the buffer overflow when writing memory, make sure to not throw an exception.
{ "change_kind": "corrective", "libraries": [], "topic": "Misc" }
77
step_counter
77_step_counter
class StepCounter: def __init__(self): self.steps = 0 self.distance = 0.0 # distance in kilometers self.steps_per_km = 1250 # average steps per km for walking def add_steps(self, steps): self.steps += steps self._update_distance() def _update_distance(self): ...
class StepCounter: def __init__(self): self.steps = 0 self.distance = 0.0 # distance in kilometers self.steps_per_km = 1250 # average steps per km for walking def add_steps(self, steps): self.steps += steps self._update_distance() def _update_distance(self): ...
### START TESTS ### if True: # pragma: no cover tracker = FitnessTracker() tracker.record_activity(2500) tracker.record_activity(1250) assert tracker.get_summary() == "Total steps: 3750, Total distance: 3 km" tracker.record_activity(1000) assert tracker.get_summary() == "Total steps: 4750, Tot...
Fix the bug that happens when the user adds exactly the steps_per_km number of steps; it does not update the distance correctly.
The distance is not updated correctly, fix the bug.
{ "change_kind": "corrective", "libraries": [], "topic": "Misc" }
78
llm_inference
78_llm_inference
from flask import Flask, request, jsonify from threading import Lock from vllm import LLM, SamplingParams HUMAN_HEADER = "Question:" AI_HEADER = "Answer:" class Inferencer: def __init__(self, model_name): self.model_name = model_name self.model_lock = Lock() self.model = None def get...
from flask import Flask, request, jsonify from threading import Lock from vllm import LLM, SamplingParams HUMAN_HEADER = "Question:" AI_HEADER = "Answer:" class Inferencer: def __init__(self, model_name): self.model_name = model_name self.model_lock = Lock() self.model = None def get...
### START TESTS ### if True: # pragma: no cover i1 = Inferencer("bigcode/starcoder") # mock LLM classes class MockOutput: def __init__(self, text): self.text = text class MockResult: def __init__(self, outputs): self.outputs = outputs class LLMMock: ...
Fix the code to be defensive against invalid requests in `predict_from_json`; protect against requests that lack the `conversation` key, where `conversation` is not a non-empty list of strings, or where the number of messages in the conversation is not odd.
Fix the code to be defensive against invalid requests in `predict_from_json`.
{ "change_kind": "corrective", "libraries": [ "vllm", "flask" ], "topic": "Data Science" }
79
int_to_key
79_int_to_key
import abc class Encoder(abc.ABC): @abc.abstractmethod def encode(self, n: int) -> str: raise NotImplementedError class LowerAlphaEncoder(Encoder): def encode(self, n: int) -> str: key = "" while n > 0: n, remainder = divmod(n - 1, 26) key = chr(97 + remaind...
import abc class Encoder(abc.ABC): @abc.abstractmethod def encode(self, n: int) -> str: raise NotImplementedError class LowerAlphaEncoder(Encoder): def encode(self, n: int) -> str: key = "" while n > 0: n, remainder = divmod(n - 1, 26) key = chr(97 + remaind...
### START TESTS ### if True: # pragma: no cover encoder0 = LowerAlphaEncoder() encoder1 = UpperAlphaEncoder() encoder2 = UpperAlphaNumericEncoder() n0 = 0 assert encoder0.encode(n0) == "" assert encoder1.encode(n0) == "" assert encoder2.encode(n0) == "" n1 = 1 assert encoder0.encode...
Fix the upper alpha numeric encode function to use upper alpha characters every 3 places, not 2. To do this, switch `is_alpha` to `char_count` and use `char_count % 3` to check if the next character should be upper alpha.
Fix the upper alpha numeric encode function to use upper alpha characters every 3 places, not 2
{ "change_kind": "corrective", "libraries": [], "topic": "Language" }
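The lower-alpha encoder in this record is bijective base-26 (spreadsheet-column style, lowercased); a standalone sketch of that core loop (the free-function name is illustrative):

```python
def encode_lower_alpha(n):
    # bijective base-26: 1 -> "a", 26 -> "z", 27 -> "aa"
    key = ""
    while n > 0:
        n, remainder = divmod(n - 1, 26)
        key = chr(97 + remainder) + key
    return key
```

The `n - 1` inside `divmod` is what makes the scheme bijective: there is no zero digit, so "z" (26) rolls over to "aa" (27) rather than to a two-character string at 26.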
80
circular_queue
80_circular_queue
class CircularQueue: def __init__(self, capacity): self.capacity = capacity self.queue = [None] * capacity self.front = self.rear = -1 def enqueue(self, item): if self.is_full() or not self.is_empty(): self.front = (self.front + 1) % self.capacity elif self.i...
class CircularQueue: def __init__(self, capacity): self.capacity = capacity self.queue = [None] * capacity self.front = self.rear = -1 def enqueue(self, item): if self.is_full(): self.front = (self.front + 1) % self.capacity elif self.is_empty(): ...
### START TESTS ### if True: # pragma: no cover capacity = 3 cq = CircularQueue(capacity) assert cq.is_empty() == True, "is_empty() should return True for an empty queue" assert cq.is_full() == False, "is_full() should return False for an empty queue" cq.enqueue(1) cq.enqueue(2) cq.enqueue(...
Correct the condition in enqueue to prevent item overwriting when the queue is not full. In the enqueue method, modify the condition that checks whether the queue is full before overwriting elements. Ensure that elements are only overwritten when the queue is genuinely full, preserving the integrity of the data structu...
Fix the condition in enqueue to prevent item overwriting when the queue is not full.
{ "change_kind": "corrective", "libraries": [], "topic": "DSA" }
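A minimal sketch of the corrected overwrite-on-full behavior: `front` advances only when the buffer is genuinely full (the `is_full`/`is_empty` definitions below are assumptions consistent with the record's tests):

```python
class CircularQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = [None] * capacity
        self.front = self.rear = -1

    def is_empty(self):
        return self.front == -1

    def is_full(self):
        return (self.rear + 1) % self.capacity == self.front

    def enqueue(self, item):
        if self.is_full():
            # only when genuinely full: drop the oldest slot
            self.front = (self.front + 1) % self.capacity
        elif self.is_empty():
            self.front = 0
        self.rear = (self.rear + 1) % self.capacity
        self.queue[self.rear] = item
```

The buggy condition `is_full() or not is_empty()` advanced `front` on every non-empty enqueue, silently discarding elements that were never overwritten.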
81
linked_list_debug
81_linked_list_debug
class Node: def __init__(self, value: int) -> None: self.value = value self.next = None class LinkedList: def __init__(self): self.head = None def add(self, value: int) -> None: if not self.head: self.head = Node(value) else: current = se...
class Node: def __init__(self, value: int) -> None: self.value = value self.next = None class LinkedList: def __init__(self): self.head = None def add(self, value: int) -> None: if not self.head: self.head = Node(value) else: current = se...
### START TESTS ### if True: # pragma: no cover def test_add_elements(): linked_list = LinkedList() linked_list.add(1) linked_list.add(2) assert linked_list.head.value == 1, "Head should be 1" assert linked_list.head.next.value == 2, "Second element should be 2" def test...
Fix the error in the find method that is causing elements to not be found. To do this, the method should be adapted to search in a loop for the next element by iteratively setting current to current.next
Fix the error in the find method that is causing elements to not be found
{ "change_kind": "corrective", "libraries": [], "topic": "DSA" }
85
dpll
85_dpll
from copy import deepcopy from typing import Optional class DPLLSolver: def __init__(self, cnf): """ initializes the DPLL Solver with a given CNF (Conjunctive Normal Form) input. :param cnf: a string representing the CNF, where each clause is on a new line, literals ar...
from copy import deepcopy from typing import Optional class DPLLSolver: def __init__(self, cnf): """ initializes the DPLL Solver with a given CNF (Conjunctive Normal Form) input. :param cnf: a string representing the CNF, where each clause is on a new line, literals ar...
### START TESTS ### if True: # pragma: no cover input1 = 'A\n!A' assert DPLLSolver(input1).dpll() is None input2 = 'A' assert DPLLSolver(input2).dpll() == {'A': True} false_input = '!A' assert DPLLSolver(false_input).dpll() == {'A': False} false_double_input = '!A\nA' assert DPLLSolv...
Correct the logic of the solver; it is currently not backtracking on empty clauses, which are unsatisfiable. If one is found, the solver should undo assignments made in the current decision level.
Fix the solver, it does not backtrack on empty clauses.
{ "change_kind": "corrective", "libraries": [], "topic": "DSA" }
86
pyast
86_pyast
import ast class UsageCounter(ast.NodeVisitor): """ Counts the usages of each identifier in the given AST. An usage does not count the definition or assignment itself; only identifiers that are used after their definition/assignment are counted. NOTE: This class does not handle the scoping rules o...
import ast class UsageCounter(ast.NodeVisitor): """ Counts the usages of each identifier in the given AST. An usage does not count the definition or assignment itself; only identifiers that are used after their definition/assignment are counted. NOTE: This class does not handle the scoping rules o...
### START TESTS ### if True: # pragma: no cover complex_ish = """ a = 1 b = 2 y, z = 3, 4 print(a + b) print(y + z) def f(x, arg=2): return x + a + arg print(f(1)) print(f(2)) print(f(3)) """ parsed = ast.parse(complex_ish) uc = UsageCounter() uc.visit(parsed) assert uc.usages == {'a': 2, 'b'...
Correct the visitor by also adding function argument definitions to the set of usages, in addition to adding support for Tuple assignments (e.g. `a, b = 1, 2`).
Fix the visitor by adding support for argument definitions and tuple assignments.
{ "change_kind": "corrective", "libraries": [], "topic": "Language" }
87
documentation
87_documentation
import ast from typing import Tuple def build_documentation(code: str) -> Tuple[str, str]: results = [] parsed_ast = ast.parse(code) def visit_FunctionDef(node: ast.FunctionDef) -> None: name = node.name args_node = node.args return_annotation = node.returns if return_annot...
import ast from typing import Tuple def build_documentation(code: str) -> Tuple[str, str]: results = [] parsed_ast = ast.parse(code) def visit_FunctionDef(node: ast.FunctionDef) -> None: name = node.name args_node = node.args return_annotation = node.returns if return_annot...
### START TESTS ### if True: # pragma: no cover code = '''def test_function_no_args(): """This is a test function with no arguments.""" pass def test_function_with_args(arg1, arg2) -> str: """Test function with arguments.""" return "" def add(a, b) -> int: return a + b def add_typed(a: int, b...
Handle the case that a type annotation does not exist on an arg. To do this, check if the type annotation exists first, and prepend ": " to the label if so.
Handle the case that a type annotation does not exist on an arg
{ "change_kind": "corrective", "libraries": [], "topic": "Language" }
88
correlation_clustering
88_correlation_clustering
import numpy as np import pandas as pd from scipy.cluster.hierarchy import linkage, fcluster from scipy.spatial.distance import squareform class FeatureSelector: """Selects features from a set of data according to their correlations""" def __init__(self, data: pd.DataFrame, columns: list[str]): self....
import numpy as np import pandas as pd from scipy.cluster.hierarchy import linkage, fcluster from scipy.spatial.distance import squareform class FeatureSelector: """Selects features from a set of data according to their correlations""" def __init__(self, data: pd.DataFrame, columns: list[str]): self....
### START TESTS ### import numpy as np import pandas as pd from scipy.cluster.hierarchy import linkage, fcluster from scipy.spatial.distance import squareform house_data = { 'Location': ['Location 1', 'Location 2', 'Location 3', 'Location 4', 'Location 5', 'Location 6', 'Location 7', 'Location 8',...
The code given clusters and selects features based on the calculated correlation between the selected columns; fix the code so that the calculated dissimilarity matrix is symmetric, so it can be used to calculate Z and the labels.
Fix the error in this code that causes the ValueError that the distance matrix 'X' must be symmetric.
{ "change_kind": "corrective", "libraries": [ "scipy", "pandas", "numpy" ], "topic": "Data Science" }
89
palindrome_detector
89_palindrome_detector
def reverseString(originalString): reversedString = "" for i in range(0, len(originalString)): reversedString += originalString[i] return reversedString def isPalindrome(originalString): return originalString.lower() == reverseString(originalString.lower())
def reverseString(originalString): reversedString = "" for i in range(len(originalString)-1, -1, -1): reversedString += originalString[i] return reversedString def isPalindrome(originalString): return originalString.lower() == reverseString(originalString.lower())
### START TESTS ### assert isPalindrome("dad") == True assert isPalindrome("madamimadam") == True assert isPalindrome("a") == True assert isPalindrome("KaYaK") == True assert isPalindrome("CIVIC") == True assert isPalindrome("computer") == False assert isPalindrome("ab") == False
The function reverseString outputs the same string as originalString, but it should output originalString in reverse. For example, reverseString("hi") should return "ih".
I want reverseString to reverse the string, but it's not.
{ "change_kind": "corrective", "libraries": [], "topic": "Language" }
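The fixed loop walks indices from the last character down to index 0; a sketch (in practice Python's slicing, `s[::-1]`, does the same in one step):

```python
def reverse_string(original):
    reversed_string = ""
    # range(len - 1, -1, -1) walks the indices back to front
    for i in range(len(original) - 1, -1, -1):
        reversed_string += original[i]
    return reversed_string
```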
90
dna_transcriber
90_dna_transcriber
def dnaToRna(base): if base == "T": return "A" elif base == "A": return "U" elif base == "C": return "G" elif base == "G": return "C" def transcribe(dna): rna = "" for i in range(len(dna)-1): rna += dnaToRna(dna[i]) return rna
def dnaToRna(base): if base == "T": return "A" elif base == "A": return "U" elif base == "C": return "G" elif base == "G": return "C" def transcribe(dna): rna = "" for i in range(len(dna)): rna += dnaToRna(dna[i]) return rna
### START TESTS ### assert transcribe("TACTAGA") == "AUGAUCU" assert transcribe("C") == "G" assert transcribe("GCTAT") == "CGAUA" assert transcribe("") == ""
Fix my program, which isn't working because the output of transcribe is always one character too short. For example, transcribe("TACTAGA") should return "AUGAUCU", but it returns "AUGAUC" instead.
Fix my program, which isn't working because the output of transcribe is always one character too short.
{ "change_kind": "corrective", "libraries": [], "topic": "Misc" }
91
interest_calculator
91_interest_calculator
def simpleInterest(principal, rate, periods): return principal * rate * periods def compoundInterest(principal, rate, compoundFreq, periods): return principal * ((1 + (rate / compoundFreq)) * (compoundFreq * periods))
def simpleInterest(principal, rate, periods): return principal * rate * periods def compoundInterest(principal, rate, compoundFreq, periods): return principal * ((1 + (rate / compoundFreq)) ** (compoundFreq * periods))
### START TESTS ### assert abs(compoundInterest(10000, .08, 4, 5) - 14859.47) < .01 assert abs(compoundInterest(10, .01, 2, 1) - 10.10) < .01 assert abs(compoundInterest(40000, .035, 12, 10) - 56733.79) < .01 assert abs(compoundInterest(1000, .05, 1, 1) - 1050) < .01 assert abs(compoundInterest(1000, .05, 1, 2) - 1102....
I want compoundInterest to return the correct compound interest. For example, compoundInterest(10000, .08, 4, 5) should return 14859.47.
I want compoundInterest to return the correct compound interest.
{ "change_kind": "corrective", "libraries": [], "topic": "Misc" }
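The bug in this record is a `*` where `**` belongs. A self-contained sketch of the corrected compound-interest formula (the snake_case function name here is illustrative, not from the record):

```python
def compound_interest(principal, rate, compound_freq, periods):
    # A = P * (1 + r/n) ** (n * t); the exponent needs **, not *
    return principal * (1 + rate / compound_freq) ** (compound_freq * periods)

print(round(compound_interest(10000, .08, 4, 5), 2))  # 14859.47, matching the test
```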
92
heron_area
92_heron_area
import math def heronArea(sideLength1, sideLength2, sideLength3): semiperimeter = (sideLength1 + sideLength2 + sideLength3)/2 return math.sqrt(semiperimeter * (semiperimeter - sideLength1) * (semiperimeter - sideLength2) * semiperimeter - sideLength3)
import math def heronArea(sideLength1, sideLength2, sideLength3): semiperimeter = (sideLength1 + sideLength2 + sideLength3)/2 return math.sqrt(semiperimeter * (semiperimeter - sideLength1) * (semiperimeter - sideLength2) * (semiperimeter - sideLength3))
### START TESTS ### import math assert abs(heronArea(3, 4.5, 6) - 6.53) < .01 assert abs(heronArea(3, 4, 5) - 6.0) < .01 assert abs(heronArea(5.5, 3.7, 5.5) - 9.58) < .01 assert heronArea(0.1, 0.1, 0.1) > 0 assert math.isclose(heronArea(1000, 1000, 1000), math.sqrt(1500 * (500 ** 3)))
I want heronArea to return the Heron area. For example, heronArea(3, 4, 5) should return 6.0.
I want my program to return the Heron area.
{ "change_kind": "corrective", "libraries": [], "topic": "Math" }
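The fix here is a misplaced parenthesis: the last factor must be `(semiperimeter - sideLength3)`, inside the product under the square root. A standalone sketch of Heron's formula:

```python
import math

def heron_area(a, b, c):
    # area = sqrt(s * (s - a) * (s - b) * (s - c)), with s the semiperimeter
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))  # 6.0, the 3-4-5 right triangle
```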
94
knn
94_knn
from typing import List from math import sqrt class Label: def __init__(self, name: str) -> None: self.name = name def __hash__(self) -> int: return 1 def __eq__(self, __value: object) -> bool: return True class Point: def __init__(self, x: int, y: int, label: Label | None)...
from typing import List, Tuple from math import sqrt class Label: def __init__(self, name: str) -> None: self.name = name def __eq__(self, __value: object) -> bool: if isinstance(__value, Label): return __value.name == self.name return False def __hash__(self) -> int:...
### START TESTS ### if True: # pragma: no cover origin = Point(0, 0, None) one_one = Point(1, 1, Label("one")) two_two = Point(2, 2, Label("two")) two_two_neg = Point(-2, -2, Label("one")) three_three = Point(3, 3, Label("two")) three_three_2 = Point(3, 3, Label("two")) assert origin == ori...
fix the k-nearest neighbors method on the Point class so that `point.knn(others: List[Point], k: int)` takes the k closest neighbors and returns the label of the largest subset of neighbors with the same label.
fix the k-nearest neighbors method on the Point class.
{ "change_kind": "corrective", "libraries": [], "topic": "Data Science" }
95
dbscan
95_dbscan
import numpy as np from scipy.spatial import distance_matrix from collections import deque class DBSCAN: def __init__(self, eps: float = 0.5, min_samples: int = 5) -> None: self.eps = eps self.min_samples = min_samples self.labels_ = [] def fit(self, X: np.ndarray) -> None: n_s...
import numpy as np from scipy.spatial import distance_matrix from collections import deque class DBSCAN: def __init__(self, eps: float = 0.5, min_samples: int = 5) -> None: self.eps = eps self.min_samples = min_samples self.labels_ = [] def fit(self, X: np.ndarray) -> None: n_s...
### START TESTS ### if True: # pragma: no cover x_0_blob_0 = (0, 0) x_1_blob_0 = (0, 0.1) x_2_blob_0 = (0.1, 0) x_3_blob_0 = (0.2, -0.1) x_0_blob_1 = (2, 2) x_1_blob_1 = (2, 2.1) x_2_blob_1 = (2.1, 2) x_3_blob_1 = (2.2, 2.1) x_0_blob_2 = (0, 2) x_1_blob_2 = (0, 2.1) x_2_blob_...
Track a visited list to prevent clustered samples from being revisited. To do this, instantiate a bitmap in the `fit` method and skip over visited samples in the loop over samples. Also, send the visited list to the `_expand_cluster` method and only expand with samples that have not been visited yet.
Track a visited set to prevent clustered samples from being revisited
{ "change_kind": "corrective", "libraries": [ "numpy", "scipy" ], "topic": "Data Science" }
96
distribution_clustering
96_distribution_clustering
import numpy as np from scipy.stats import multivariate_normal class GMM: def __init__(self, n_components: int, n_iter: int) -> None: self.n_components = n_components self.n_iter = n_iter self.means = None self.covariances = None self.pi = None self.reg_covar = 1e-6 ...
import numpy as np from scipy.stats import multivariate_normal class GMM: def __init__(self, n_components: int, n_iter: int) -> None: self.n_components = n_components self.n_iter = n_iter self.means = None self.covariances = None self.pi = None self.reg_covar = 1e-6 ...
### START TESTS ### if True: # pragma: no cover x_0_blob_0 = (0, 0) x_1_blob_0 = (0, 0.1) x_2_blob_0 = (0.1, 0) x_3_blob_0 = (0.2, -0.1) x_4_blob_0 = (0.1, 0.1) x_5_blob_0 = (0.2, 0) x_6_blob_0 = (0, 0.01) x_7_blob_0 = (0.01, 0) x_8_blob_0 = (0.1, 0.01) x_9_blob_1 = (2, 2) x_...
Fix an error in which the covariance matrices may not be positive definite. To do this, apply a small regularization term to the matrices by adding some epsilon to the diagonal of the covariance matrices.
Fix an error in which the covariance matrix may not be positive definite
{ "change_kind": "corrective", "libraries": [ "numpy", "scipy" ], "topic": "Data Science" }
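The described fix adds `reg_covar` to the diagonal of each covariance matrix so it stays positive definite. A minimal sketch of that regularization step (the helper name is hypothetical, not from the record):

```python
import numpy as np

def regularize_cov(cov: np.ndarray, reg_covar: float = 1e-6) -> np.ndarray:
    # adding epsilon to the diagonal shifts every eigenvalue up by epsilon,
    # so a singular covariance estimate becomes (barely) positive definite
    return cov + reg_covar * np.eye(cov.shape[0])

singular = np.zeros((2, 2))        # would make multivariate_normal fail as-is
fixed = regularize_cov(singular)
assert np.all(np.linalg.eigvalsh(fixed) > 0)
```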
101
house_prices
101_house_prices
from typing import List, Tuple class House: def __init__(self, location: Tuple[int, int], bedrooms: int, bathrooms: int): self.location = location self.bedrooms = bedrooms self.bathrooms = bathrooms def distance_to(self, other: 'House') -> float: return ((self.location[0] - ot...
from typing import List, Tuple class House: def __init__(self, location: Tuple[int, int], bedrooms: int, bathrooms: int): self.location = location self.bedrooms = bedrooms self.bathrooms = bathrooms def distance_to(self, other: 'House') -> float: return ((self.location[0] - ot...
### START TESTS ### if True: # pragma: no cover a = House((0, 0), 3, 2) b = House((1, 1), 4, 3) c = House((2, 2), 2, 1) d = House((3, 3), 3, 2) e = House((4, 4), 4, 3) f = House((5, 5), 2, 1) g = House((6, 6), 100, 100) # huge mansion! house1 = House((10, 20), 3, 2) assert house1.l...
Add a method `estimate_location(self, other_houses: List['House']) -> Tuple[float, float]` that returns the estimated appropriate location for the house based on the average location of the 5 closest houses in terms of price, where the price of other houses is calculated using the estimate_price method. Do not modify t...
Add a method `estimate_location` that returns the estimated appropriate location for this house, calculated by getting the average location of the top 5 most similar houses in terms of estimated price.
{ "change_kind": "adaptive", "libraries": [], "topic": "Math" }
102
nfa
102_nfa
from typing import Literal, List Input = Literal["a", "b", ""] State = Literal[0, 1, 2] class NFA: def __init__(self) -> None: self.current: State = 0 self.accept: set[State] = {1, 2} def transition(self, input: Input) -> List[State]: table = { 0: {"a": [1, 2], "b": [], "...
from typing import Literal, List Input = Literal["a", "b", ""] State = Literal[0, 1, 2, 3] class DFA: def __init__(self) -> None: self.current: State = 0 self.accept: set[State] = {1} def transition(self, input: Input) -> State: table: dict[State, dict[Input, State]] = { ...
### START TESTS ### if True: def acceptsString(dfa: DFA, word: List[Input]) -> bool: for symbol in word: dfa.current = dfa.transition(symbol) return dfa.accepted() assert acceptsString(DFA(), ["", "", "", "a"]) assert acceptsString(DFA(), ["", "", "a"]) assert acceptsString...
change the class so that it represents an equivalent deterministic finite automaton called DFA. This entails that the transition method should now have signature `transition(self, input: Input) -> State`. An automaton is equivalent if the languages that they both accept are the same.
change the class so that it represents an equivalent deterministic finite automaton called DFA
{ "change_kind": "adaptive", "libraries": [], "topic": "Language" }
2
cov_corr
2_cov_corr
class Probability: def sample_mean(self, X): """Computes the sample mean of the data""" return sum(X) / len(X) def variance(self, X): """Computes the variance of the data""" mean = sum(X) / len(X) return sum((x - mean) ** 2 for x in X) / len(X) def correlation(self...
class Probability: def sample_mean(self, X): """Computes the sample mean of the data""" return sum(X) / len(X) def variance(self, X): """Computes the variance of the data""" mean = sum(X) / len(X) return sum((x - mean) ** 2 for x in X) / len(X) def covariance(s...
### START TESTS ### if True: # pragma: no cover X1 = [1.2, 3.5, 7.8, 4.6, 5.7, 8.9, 6.4, 10.2, 3.9, 7.1] X2 = [0.5, 2.3, 4.7, 6.9, 16.0, 18.2, 20.5, 22.7, 24.9] X3 = [2.75, 3.82, 5.16, 6.91, 9.24, 19.45, 21.18, 23.56, 25.99] X4 = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7] assert round(Probability()....
Flip the given correlation function to instead calculate the covariance using the correlation between X and Y, the variance of X, and the variance of Y. Rearrange the equations and replace the correlation function with a function that takes in the correlation, the variance of X, and the variance of Y, in that order.
Flip the given correlation function to instead calculate the covariance using Corr(X, Y), Var(X), and Var(Y). The new function should take in Corr(X, Y), Var(X), and Var(Y), in that order.
{ "change_kind": "adaptive", "libraries": [], "topic": "Math" }
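Rearranging Corr(X, Y) = Cov(X, Y) / sqrt(Var(X) * Var(Y)) gives the covariance the new function must return. A sketch of that rearrangement:

```python
import math

def covariance(corr: float, var_x: float, var_y: float) -> float:
    # Corr(X, Y) = Cov(X, Y) / sqrt(Var(X) * Var(Y)), solved for Cov(X, Y)
    return corr * math.sqrt(var_x * var_y)

print(covariance(0.5, 4.0, 9.0))  # 3.0
```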
97
nash_equilibrium
97_nash_equilibrium
from typing import List, Tuple class Cell: def __init__(self, pay1, pay2): self.pay1 = pay1 self.pay2 = pay2 class Game: def __init__(self, p1: List[str], p2: List[str], payoffs: List[List[Cell]]) -> None: """ p1: list of strategies for player 1 p2: list of strategies...
from typing import List, Tuple class Cell: def __init__(self, pay1, pay2): self.pay1 = pay1 self.pay2 = pay2 class Game: def __init__(self, p1: List[str], p2: List[str], payoffs: List[List[Cell]]) -> None: """ p1: list of strategies for player 1 p2: list of strategies...
### START TESTS ### if True: # pragma: no cover p1 = ["X", "Y"] p2 = ["A", "B"] payoffs = [ [Cell(1, 2), Cell(2, 1)], [Cell(3, 3), Cell(4, 4)] ] game = Game(p1, p2, payoffs) assert len(game.p1) == len(payoffs) assert len(game.p2) == len(payoffs[0]) assert all(len(row) == ...
Add a new method to the `Game` class called `nash_equilibriums(self) -> List[Tuple[str, str]]` that returns a list of Nash equilibria for the game, with each pair being the strategy for player 1 and player 2. If there are no Nash equilibria, return an empty list. A Nash equilibrium happens when both players are pl...
Write a method `nash_equilibrium(self) -> List[Tuple[str, str]]` in the Game class that returns the Nash equilibrium(s) as (s1, s2) pairs.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
98
encoder_decoder_dataset
98_encoder_decoder_dataset
import torch from typing import List, Tuple from torch.nn.utils.rnn import pad_sequence from abc import ABC, abstractmethod def tokens_to_tensor(token_ids, sp): return torch.cat((torch.tensor([sp.bos_id()]), torch.tensor(token_ids), torch.tensor([sp.eos_id()]))) class...
import torch from typing import List, Tuple from torch.nn.utils.rnn import pad_sequence from abc import ABC, abstractmethod def tokens_to_tensor(token_ids, sp): return torch.cat((torch.tensor([sp.bos_id()]), torch.tensor(token_ids), torch.tensor([sp.eos_id()]))) class...
### START TESTS ### if True: # pragma: no cover class MockTokenizer: def __init__(self): pass def bos_id(self): return 1 def eos_id(self): return 2 def pad_id(self): return 0 def encode_as_ids(self, s): return [o...
Implement the `EncoderDecoderDatasetImpl` class, which is a subclass of `EncoderDecoderDataset`. This class will be used to create the dataset for the encoder-decoder model, and returns a tuple of the input sequence and output sequence from the given data item, which should be split by self.split.
Implement `EncoderDecoderDatasetImpl`.
{ "change_kind": "adaptive", "libraries": [ "torch" ], "topic": "Data Science" }
99
secondary_keys
99_secondary_keys
from typing import Any, Hashable, Optional class KeyValueCache: def __init__(self) -> None: self.primary_cache = {} self.secondary_key_map = {} def put(self, primary_key: Hashable, value: Any, secondary_keys: Optional[list[Hashable]] = None) -> None: self.primary_cache[primary_key] = v...
from typing import Any, Hashable, Optional class KeyValueCache: def __init__(self) -> None: self.primary_cache = {} self.secondary_key_map = {} self.stats = { "hits": 0, "misses": 0, "entries": 0 } def put(self, primary_key: Hashable, value: ...
### START TESTS ### if True: # pragma: no cover def test_cache_statistics(): cache = KeyValueCache() assert cache.get_hits() == 0, "Hits initialization failed" assert cache.get_misses() == 0, "Misses initialization failed" assert cache.get_num_entries() == 0, "Entries initialization...
Add the ability to track hits, misses, and number of entries by adding `get_hits`, `get_misses`, and `get_num_entries` methods. To do this, add an instance variable `stats` that is a dictionary that tracks hits, misses, and the number of entries at the given time. On insertion, deletion, and lookup, update the number o...
Add the ability to track hits, misses, and number of entries by adding `get_hits`, `get_misses`, and `get_num_entries` methods.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
103
postfix
103_postfix
from typing import Literal, List Op = Literal["+", "-", "*", "/"] Token = int | Op class PostfixParser: def parse(self, inputs: List[Token]) -> float: """parses a sequence of input tokens using postfix notation and computes the result""" def parseHelp(inputs: List[Token], stack: List[float]) -> ...
from typing import Literal, List Op = Literal["+", "-", "*", "/"] Token = int | Op class PostfixParser: def parse(self, inputs: List[Token]) -> float: """parses a sequence of input tokens using postfix notation and computes the result""" def parseHelp(inputs: List[Token], stack: List[float]) -> ...
### START TESTS ### if True: # pragma: no cover pp = PostfixParser() assert pp.parse([1, 2, "+"]) == 3 assert pp.parse([1]) == 1 assert pp.parse([1, 2, 3, "+", "+"]) == 6 assert pp.parse([1, 2, 3, "-", "-"]) == 2 assert pp.parse([1, 2, "-", 1, 2, "-", "-"]) == 0 assert pp.parse([1, 2, "*"]...
the method parse computes an expression represented as a list of tokens in postfix notation. Change it so that it raises an Exception when the input is malformed. To compute an expression in postfix notation: 1. scan down the list until there is an operator 2. apply the operator to the last two numbers and replace them...
the method parse computes an expression represented as a list of tokens in postfix notation. Change it so that it raises an Exception when input is malformed.
{ "change_kind": "perfective", "libraries": [], "topic": "Language" }
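The two-step procedure in the instruction (scan to an operator, fold the top two operands) is classic stack evaluation; malformed input shows up as an operator with fewer than two operands on the stack, or as leftover operands at the end. A standalone sketch under those assumptions (the record's class is named `PostfixParser`; the free function here is illustrative):

```python
def parse_postfix(tokens):
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            if len(stack) < 2:          # operator without two operands
                raise Exception("malformed postfix expression")
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(tok)
    if len(stack) != 1:                 # leftover operands, or empty input
        raise Exception("malformed postfix expression")
    return stack[0]

print(parse_postfix([1, 2, 3, "-", "-"]))  # 2, matching the record's tests
```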
104
filesystem
104_filesystem
from typing import Callable, List from abc import ABC, abstractmethod class File(ABC): """ Represents a file in the file system. """ def __init__(self, name: str, permissions: int, owner: str): assert 0 <= permissions <= 0o777, "Invalid permissions..." self.name = name self.pe...
from typing import Callable, List from abc import ABC, abstractmethod class File(ABC): """ Represents a file in the file system. """ def __init__(self, name: str, permissions: int, owner: str): assert 0 <= permissions <= 0o777, "Invalid permissions..." self.name = name self.pe...
### START TESTS ### if True: # pragma: no cover regular_file = RegularFile("example.txt", 0o644, "user1", "Hello, world!") assert regular_file.name == "example.txt" assert regular_file.permissions == 0o644 assert regular_file.owner == "user1" assert regular_file.content == "Hello, world!" try: ...
Fix map_files and map_content in Directory, both functions are not traversing the files in the directory correctly, they should call the function recursively for each file in the directory.
Fix both map implementations for Directory, they don't respect the docstring.
{ "change_kind": "corrective", "libraries": [], "topic": "Misc" }
105
descent_methods
105_descent_methods
from typing import List, Tuple import numpy as np from autograd import grad class descent: def __init__( self, step: float = 0.1, max_iter: int = 50, convergence: float = 1e-3, initial_points: Tuple[float, float] = (-1, -0.9), ): self.step = ...
from typing import List, Tuple import numpy as np from autograd import grad class descent: def __init__( self, step: float = 0.1, max_iter: int = 50, convergence: float = 1e-3, initial_points: Tuple[float, float] = (-1, -0.9), ): self.step = ...
### START TESTS ### if True: # pragma: no cover def test_function(x: float) -> float: return (x + 2)*x*(x - 1) assert test_function(1) == 0 assert test_function(0) == 0 assert test_function(-2) == 0 assert abs(grad(test_function)(0.549) - 0) < 1e-2 assert abs(grad(test_function)(-1.25...
Fix the newtons_method_minimum() to converge to the correct value. It seems as if the update from x_n to x_n+1 is not correct. Note that Newton's method for minimum finding aims to find the roots of the gradient of a function, whereas the traditional Newton's method simply seeks to find the roots of the given function...
Fix the newtons_method_minimum() to converge to the correct extrema for the given function. Please use the grad() function to compute the gradient a function when necessary.
{ "change_kind": "corrective", "libraries": [ "numpy", "autograd" ], "topic": "Math" }
106
conways_game
106_conways_game
from typing import List class ConwaysGameOfLife: """ Represents a grid of conway's game of life, where each cell is either alive or dead. The rules of the game are the following: 1. Any live cell with fewer than two live neighbors dies, as if by underpopulation. 2. Any live cell with two or three ...
from typing import List class ConwaysGameOfLife: """ Represents a grid of conway's game of life, where each cell is either alive or dead. The rules of the game are the following: 1. Any live cell with fewer than two live neighbors dies, as if by underpopulation. 2. Any live cell with two or three ...
### START TESTS ### if True: # pramga: no cover blinker = [ [0, 0, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 0, 0] ] game = ConwaysGameOfLife(blinker.copy()) game.step() new_state = [ [0, 0, 0, 0, 0], [0, 0, 0, 0, ...
Fix the implementation of the `compute_alive_nearby_cells` method in the `ConwaysGameOfLife` class. The method currently does not account for the fact that grids have a limited size, and thus may index out of bounds.
Fix how the alive neighbor count is calculated.
{ "change_kind": "corrective", "libraries": [], "topic": "DSA" }
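The out-of-bounds bug described above is usually fixed with an explicit range check on each neighbor index (negative indices would otherwise silently wrap around in Python). A sketch of a bounds-safe neighbor count, with a hypothetical helper name standing in for `compute_alive_nearby_cells`:

```python
def count_alive_neighbors(grid, row, col):
    rows, cols = len(grid), len(grid[0])
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue                          # skip the cell itself
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols:   # bounds check on both axes
                count += grid[r][c]
    return count

grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(count_alive_neighbors(grid, 0, 0))  # 2
```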
107
multiindex_sort
107_multiindex_sort
class Comparators: """ A class for that allows for custom comparator actions that work in conjuction with Python's default sorted function Example usage: `sorted(lorem_ipsum, key=Comparators.by_length)` """ def by_length(obj): """Comparing by length of object""" return len(obj) ...
class Comparators: """ A class for that allows for custom comparator actions that work in conjuction with Python's default sorted function Example usage: `sorted(lorem_ipsum, key=Comparators.by_length)` """ def by_length(obj): """Comparing by length of object""" return len(obj) ...
### START TESTS ### if True: # pragma: no cover lorem_ipsum = ["Lorem", "ipsum", "dolor sit", "amet", "consectetur", "adipiscing"] fruits = ["apple", "banana", "orange", "grapefruit", "kiwi", "pear"] makeup = ["ultra shiny liquid lipstick", "brush", "blush", "brown brow pomade", ...
Write a function `sort_with_tiebreaker(items, primary, tiebreaker)` in the `Comparators` class which takes in a list of items, a primary sorting method and a tiebreaker sorting method, which returns the list sorted with the primary comparator, with items that tie in value being sorted by the tiebreaker.
Write a function `sort_with_tiebreaker(items, primary, tiebreaker)` in the `Comparators` class that sorts the items with the primary comparator, and tiebreaks with the tiebreaker comparator.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
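Because Python tuples compare element-wise, `sort_with_tiebreaker` can be a one-liner over `sorted` with a composite key. A sketch (written as a free function rather than inside the `Comparators` class):

```python
def sort_with_tiebreaker(items, primary, tiebreaker):
    # tuple keys sort by primary first; tiebreaker only matters on equal values
    return sorted(items, key=lambda x: (primary(x), tiebreaker(x)))

words = ["pear", "fig", "kiwi", "date"]
# sort by length, ties broken alphabetically
print(sort_with_tiebreaker(words, len, str.lower))  # ['fig', 'date', 'kiwi', 'pear']
```

Python's sort is stable, so equal (primary, tiebreaker) pairs also keep their original order.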
54
strategy
54_strategy
from abc import ABC from abc import abstractmethod from typing import List, Tuple class Strategy(ABC): @abstractmethod def returnMove(self, board: List[List[bool]]) -> Tuple[int, int]: '''Returns a tuple(row, column) which indicates where to move in a 3x3 grid.''' pass class Corner...
from abc import ABC from abc import abstractmethod from typing import List, Tuple class Strategy(ABC): @abstractmethod def returnMove(self, board: List[List[bool]]) -> Tuple[int, int]: '''Returns a tuple(row, column) which indicates where to move in a 3x3 grid.''' pass class Corner...
### START TESTS ### if True: # pragma: no cover # Game tests gameOver = Game(None, None) gameOver.board = [[True, False, True], [False, True, False], [True, False, True]] assert gameOver.gameOver() player1Won = Game(None, None) player1Won.board = [[T...
Create a class `GoodStrategy` which extends `Strategy` such that `Game(GoodStrategy(), CornerStrategy()).player1Won()` returns `True`. This cannot be solved by modifying the `Game`, `Strategy`, or `CornerStrategy` classes in any way. The following code describes a tic-tac-toe game which takes in two strategies and det...
Create a strategy `GoodStrategy`, that beats `CornerStrategy`. Do not modify the `Game` class.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
110
integration
110_integration
from typing import Optional import numpy as np from autograd import grad class integrator: def __init__(self, lower: float, upper: float, stepsize: float): self.lower = lower self.upper = upper self.stepsize = stepsize def rectangle_left(self, f): result = 0 x = self.l...
from typing import Optional import numpy as np from autograd import grad class integrator: def __init__(self, lower: float, upper: float, stepsize: float): self.lower = lower self.upper = upper self.stepsize = stepsize def rectangle_left(self, f) -> float: result = 0 x...
### START TESTS ### if True: # pragma: no cover import math as Math def test_function(x: float) -> float: return 2**x integrator_one = integrator(1, 5, 0.0001) assert abs(integrator_one.rectangle_left(test_function) - 30/Math.log(2)) < 0.1 assert abs(integrator_one.rectangle_middle(test_fu...
Add a method "simpson" to the integrator class that takes in arguments of (self, f) and uses Simpson's rule to integrate the given function f. I am specifically referring to Simpson's 1/3 rule, which approximates an integral by evaluating the function at the limits of integration a and b as well as at the midpoint (a + b)/2.
Add a method "simpson" to the integrator class that takes in arguments of self and a function f that uses Simpson's method to integrate the given function.
{ "change_kind": "adaptive", "libraries": [ "numpy", "autograd" ], "topic": "Math" }
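Simpson's 1/3 rule over a single interval [a, b] weights the midpoint four times as heavily as the endpoints. A minimal sketch (a free function rather than a method on the record's `integrator` class):

```python
def simpson(f, a, b):
    # integral of f over [a, b] ~= (b - a) / 6 * (f(a) + 4 * f(mid) + f(b))
    mid = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(mid) + f(b))

print(simpson(lambda x: x ** 2, 0, 3))  # 9.0 -- exact for cubics and below
```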
100
pandas_apply
100_pandas_apply
import pandas as pd class StringOperations: """A class containing a series of string operations""" def remove_duplicates(text): """Returns the text with only unique characters""" unique = [] for char in text: if char not in unique: unique.append(char) ...
import pandas as pd class StringOperations: """A class containing a series of string operations""" def remove_duplicates(text): """Returns the text with only unique characters""" unique = [] for char in text: if char not in unique: unique.append(char) ...
### START TESTS ### if True: # pragma: no cover assert StringOperations.remove_duplicates('hello') == 'helo' assert StringOperations.remove_duplicates('mississippi') == 'misp' assert StringOperations.remove_duplicates('python') == 'python' assert StringOperations.remove_duplicates('unique char...
Fix the `calculate_all_properties` and `multi_apply` functions to have the signatures `calculate_all_properties(text, functions)` and `multi_apply(data, col, colnames, functions)`, respectively, so that instead of hardcoding the functions used to calculate the properties, `multi_apply` accepts a list of functions to be...
Fix the `calculate_all_properties` and `multi_apply` functions to have the signatures `calculate_all_properties(text, functions)` and `multi_apply(data, col, colnames, functions)`, respectively, so that both functions take in a list of functions to calculate the properties with, rather than just having hardcoded functi...
{ "change_kind": "corrective", "libraries": [ "pandas" ], "topic": "Data Science" }
111
coprime_euler
111_coprime_euler
import math def gcd(a : int, b : int) -> int: """Compute the Greatest Common Divisor (GCD) of a and b.""" assert a > 0 and b > 0 while b != 0: a, b = b, a % b return a def euler_totient(n : int) -> int: """Compute the Euler's Totient function of n.""" assert n > 0 if n == 1 : retu...
import math def gcd(a : int, b : int) -> int: """Compute the Greatest Common Divisor (GCD) of a and b.""" assert a > 0 and b > 0 while b != 0: a, b = b, a % b return a def euler_totient(n : int) -> int: """Compute the Euler's Totient function of n.""" assert n > 0 if n == 1 : retu...
### START TESTS ### if True: # pragma: no cover assert gcd(1,1) == 1 assert gcd(1,2) == 1 assert gcd(3,7) == 1 assert gcd(4,2) == 2 assert gcd(3123,312) == 3 assert gcd(25,45) == 5 assert gcd(987, 987) == 987 for i in range(1,50): for j in range(1,50): assert gcd(i,j...
Edit the code to include a method `powermod(base : int, exp : int, mod : int) -> int` that computes modular exponentiation, a^b mod c, via successive squaring. Define it such that for input a^b, it recursively computes a^{b/2} and calculates a^{b/2} * a^{b/2} mod c. Ensure the case where the exponent is 0 returns 1. Upda...
Edit the code to include a method `powermod` that computes modular exponentiation, a^b mod c, via successive squaring. Update `check_coprime_euler` with this new function.
{ "change_kind": "adaptive", "libraries": [], "topic": "DSA" }
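Successive squaring halves the exponent on each recursive call: a^e = (a^{e//2})^2, times one extra factor of a when e is odd, with a^0 = 1 as the base case. A sketch of such a `powermod`:

```python
def powermod(base: int, exp: int, mod: int) -> int:
    if exp == 0:
        return 1                        # a^0 = 1, the base case
    half = powermod(base, exp // 2, mod)
    result = (half * half) % mod        # square the half-exponent result
    if exp % 2 == 1:
        result = (result * base) % mod  # odd exponent: one extra factor of base
    return result

assert powermod(5, 117, 19) == pow(5, 117, 19)  # agrees with Python's built-in
```

This runs in O(log exp) multiplications, versus O(exp) for naive repeated multiplication.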
112
elliptic_curves
112_elliptic_curves
import random def is_prime(n): """Check if a number is prime.""" if n <= 1: return False for i in range(2, int(n**0.5) + 1): if n % i == 0: return False return True class EllipticCurve: def __init__(self, a : int, b : int, p : int): self.a = a self.b = ...
import random def is_prime(n): """Check if a number is prime.""" if n <= 1: return False for i in range(2, int(n**0.5) + 1): if n % i == 0: return False return True class EllipticCurve: def __init__(self, a : int, b : int, p : int): self.a = a self.b = ...
### START TESTS ### if True: assert is_prime(5) assert not is_prime(16) assert not is_prime(1) curve1 = EllipticCurve(4,4,5) assert curve1.is_on_curve(1,3) assert curve1.is_on_curve(0,2) assert not curve1.is_on_curve(2,2) assert curve1.point_addition((1,3),(1,3)) == (2,0) assert cur...
Edit the code to include a new method `windowed_point_multiplication(self, k: int, P: tuple) -> tuple` that computes elliptic curve point multiplication using the windowing method. That is, given a window size w with a default value of 4, precompute all 2^w powers of the given point. Then, as you compute the double-and-ad...
Edit the code to include a new method `windowed_point_multiplication` that computes elliptic curve point multiplication using the windowing method. That is, given a window size w, precompute all 2^w powers of the given point, and use the precomputed values in the double-and-add procedure. Ensure `generate_keypair` and `va...
{ "change_kind": "adaptive", "libraries": [], "topic": "Math" }
113
schnorr_zk
113_schnorr_zk
import hashlib from typing import Tuple def keygen(p: int, g: int, x: int) -> Tuple[Tuple[int, int, int], int]: """generate public and private key with given prime (p), base (g), and private key (x).""" y = pow(g, x, p) # public key return (p, g, y), x def prover_commitment(p: int, g: int, r: int) -> T...
import hashlib from typing import Tuple def keygen(p: int, g: int, x: int) -> Tuple[Tuple[int, int, int], int]: """generate public and private key with given prime (p), base (g), and private key (x).""" y = pow(g, x, p) # public key return (p, g, y), x def prover_commitment(p: int, g: int, r: int) -> T...
### START TESTS ### if True: p1 = 106370619031455416265556180880535612754694154891931768764891927199982044991293 g1 = 62396934948727367902534680978401865344491133099510338373553753384248885001077 x1 = 17293013998955379273582941822693540654895591849320486454120541612393742535976 r1 = 24028398142591543250...
Edit the Schnorr zero knowledge protocol to be non-interactive. That is, in the zero knowledge procedure replace the `verifier_challenge` function with a new function `hash_to_challenge(t : int, y : int, p : int) -> int` that uses the prover commitment `t`, the public key `y`, and the given prime `p` to generate a secur...
Edit the Schnorr zero knowledge protocol to be non-interactive. That is, in the zero knowledge procedure replace the `verifier_challenge` function with a function `hash_to_challenge` that uses the prover commitment, the public key, and the given prime to generate a secure challenge.
{ "change_kind": "adaptive", "libraries": [], "topic": "Math" }
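Deriving the challenge by hashing (t, y, p) instead of asking a verifier is the Fiat-Shamir transform. The record truncates before showing the exact serialization, so the encoding below is an assumption; any deterministic, unambiguous encoding of the three integers works:

```python
import hashlib

def hash_to_challenge(t: int, y: int, p: int) -> int:
    # the "|"-separated serialization is an assumption, not from the record
    digest = hashlib.sha256(f"{t}|{y}|{p}".encode()).digest()
    return int.from_bytes(digest, "big") % p   # reduce into the challenge space

c = hash_to_challenge(12345, 67890, 101)
assert 0 <= c < 101
assert c == hash_to_challenge(12345, 67890, 101)  # deterministic, so verifiable
```

Determinism is what makes the proof non-interactive: the verifier recomputes the same hash and checks the response against it.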
114
grid_world_dp
114_grid_world_dp
import json from typing import Tuple, Literal, List, Union # defining a bunch of types to make the code more readable State = Tuple[int, int] Action = Literal["left", "right", "up", "down"] actions: List[Action] = ["left", "right", "up", "down"] Policy = List[List[Union[List[Action], Literal["TERM"]]]] StateValue = L...
import json from typing import Tuple, Literal, List, Union # defining a bunch of types to make the code more readable State = Tuple[int, int] Action = Literal["left", "right", "up", "down"] actions: List[Action] = ["left", "right", "up", "down"] Policy = List[List[Union[List[Action], Literal["TERM"]]]] StateValue = L...
### START TESTS ### if True: # pragma: no cover p1 = policy_str(policy) assert p1 == """TERM | L | L | L | L | L | L | LD U | LU | LU | LU | LU | LU | LRUD | D U | LU | LU | LU | LU | LRUD | RD | D U | LU | LU | ...
Fix the implementation of the value_iteration function, the way it selects the best actions for a state is incorrect for both the improvement and the evaluation steps.
Fix the implementation of value iteration, the way it gets the best actions for a state is wrong.
{ "change_kind": "corrective", "libraries": [], "topic": "DSA" }
115
arrangement_selections
115_arrangement_selections
import math def permutation(n, r): return int(math.factorial(n) / math.factorial(n - r)) def combination(n, r): return int(math.factorial(n) / (math.factorial(r) * math.factorial(n - r))) def arrangement_unlimited_rep(n, r): return int(n ** r) def combination_unlimited_rep(n, r): return int(combina...
import math def permutation(n, r): return int(math.factorial(n) / math.factorial(n - r)) def combination(n, r): return int(math.factorial(n) / (math.factorial(r) * math.factorial(n - r))) def arrangement_unlimited_rep(n, r): return int(n ** r) def combination_unlimited_rep(n, r): return int(combinat...
### START TESTS ### assert combination(6, 3) == 20 assert combination(3, 2) == 3 assert combination(1, 1) == 1 assert permutation(7, 4) == 840 assert permutation(12, 7) == 3991680 assert combination_unlimited_rep(7, 5) == 330 assert combination_unlimited_rep(5, 3) == 21 assert combination_unlimited_rep(10, 3) == 66 a...
Fix combination_unlimited_rep(), which currently returns the wrong result. The function combination_unlimited_rep() takes two integers, n and r, and is supposed to return the factorial of n+r-1, divided by the factorial of n times the factorial of r-1. The function should do this by calling on combination() with th...
Fix combination_unlimited_rep() so that it returns the right result. The function combination_unlimited_rep should return the combination of n+r-1 and n by calling on combination() with those arguments.
{ "change_kind": "corrective", "libraries": [], "topic": "Math" }
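The record's `combination()` call is truncated, but its test values (330, 21, 66) pin the formula down to C(n + r - 1, n) under this dataset's argument convention. A sketch using the standard library's `math.comb`:

```python
import math

def combination_unlimited_rep(n, r):
    # C(n + r - 1, n): the values the record's tests expect, e.g. (7, 5) -> 330
    return math.comb(n + r - 1, n)

print(combination_unlimited_rep(7, 5))  # 330
```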