This repository has been archived by the owner on May 25, 2022. It is now read-only.

Search Wikipedia from the terminal project #596

Open · wants to merge 1 commit into base: master
Binary file added .DS_Store
Binary file not shown.
Binary file modified projects/.DS_Store
Binary file not shown.
27 changes: 27 additions & 0 deletions projects/wikipedia_search/README.md
@@ -0,0 +1,27 @@
# Terminal Wikipedia
Search Wikipedia from the terminal and get short descriptions of the topics you are interested in.

### Prerequisites
Install the modules the script depends on:

````
pip install colorama
pip install requests
pip install beautifulsoup4
````

### How to run the script
Pass the topic you want to look up as a command-line argument:
````
python wiki.py [topic]
````
Or run the script with no arguments and you will be prompted to enter a topic.
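Multi-word topics are supported: as `wiki.py` shows, the script joins the arguments with underscores before building the Wikipedia URL. A minimal sketch of that joining step:

````python
# mirror of the keyword handling in wiki.py:
# command-line words are joined with underscores for the Wikipedia page title
args = ["albert", "einstein"]  # stands in for sys.argv[1:]
keyword = "_".join(args)
print(keyword)  # → albert_einstein
````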

### Author
* GitHub - https://github.com/TheMambaDev
* Twitter - https://twitter.com/TheMambaDev
51 changes: 51 additions & 0 deletions projects/wikipedia_search/wiki.py
@@ -0,0 +1,51 @@
import sys

import requests
from bs4 import BeautifulSoup
from colorama import Fore

keyword = sys.argv[1:]

if keyword == []:
    # the Wikipedia REST API expects underscores instead of spaces in page titles
    keyword = input("Please enter a keyword: ").strip().replace(" ", "_")
else:
    keyword = "_".join(keyword)

print("Searching for:", keyword + "\n")

req = requests.get("https://en.wikipedia.org/api/rest_v1/page/html/" + keyword)

# a non-200 response means no page exists for this title
if req.status_code != 200:
    print(Fore.YELLOW + "No info found for this keyword.\n")
    sys.exit()

# parse the response once and reuse the soup for both lookups
soup = BeautifulSoup(req.text, "html.parser")
wiki = soup.find_all("p")
link = soup.find("a")["href"]
full_text = []

# get the first 10 paragraphs, skipping empty ones
for i, p in enumerate(wiki):
    if i == 10:
        break

    if p.text == "":
        continue

    full_text.append(p.text + "\n")

# if there is no text, alert the user about insufficient data
if len(full_text) < 2:
    print(Fore.YELLOW + "No info found for this keyword.\n")
    sys.exit()

# highlight the first paragraph
full_text[0] = Fore.GREEN + full_text[0] + Fore.RESET

# add a link to the full article at the end of the text
full_text[-1] = (
    "for more information please visit: "
    + Fore.CYAN
    + "https://en.wikipedia.org/wiki/"
    + link[2:]  # hrefs in the REST HTML look like "./Topic"; drop the "./"
    + Fore.RESET
)

print("\n".join(full_text))
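The paragraph-collection logic in `wiki.py` can be exercised offline against a small HTML snippet — a minimal sketch, where the HTML below is made up for illustration rather than taken from a real API response:

````python
from bs4 import BeautifulSoup

# hand-written stand-in for a Wikipedia REST HTML response
html = """
<p></p>
<p>First paragraph.</p>
<p>Second paragraph.</p>
<a href="./Example">Example</a>
"""

soup = BeautifulSoup(html, "html.parser")

# same filtering as wiki.py: keep only non-empty paragraphs
paragraphs = [p.text for p in soup.find_all("p") if p.text != ""]

print(paragraphs)                   # → ['First paragraph.', 'Second paragraph.']
print(soup.find("a")["href"][2:])   # → Example (the "./" prefix stripped)
````

Testing against a fixed snippet like this avoids hitting the live API while still covering the empty-paragraph filter and the `"./"`-stripping on the first link.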