
in4m


A Command Line Utility To Stay Up to Date!

Libraries

The following libraries were used in this project:

| Library | Purpose |
| --- | --- |
| cloudscraper | Retrieve website content reliably from sites that use Cloudflare. |
| beautifulsoup4 | Parse and extract HTML content. |
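In the project itself, cloudscraper fetches the page and beautifulsoup4 extracts the items. As a dependency-free illustration of the same parse-and-extract step, the stdlib `HTMLParser` sketch below pulls headline text out of a static snippet — the `<h2 class="title">` markup and the sample headlines are illustrative assumptions, not a real source's layout.

```python
# Dependency-free sketch of the extract step that beautifulsoup4 performs
# in the project: collect the text of every <h2 class="title"> element.
# The markup below is an illustrative assumption, not a real source's HTML.
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())

snippet = ('<h2 class="title">New CVE dropped</h2><p>body</p>'
           '<h2 class="title">Patch Tuesday recap</h2>')
parser = TitleExtractor()
parser.feed(snippet)
print(parser.titles)  # ['New CVE dropped', 'Patch Tuesday recap']
```

With beautifulsoup4 the same extraction collapses to a one-line `soup.select("h2.title")` call, which is why the project depends on it.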

Installation

# Create a virtual environment (recommended)
python3 -m venv in4m-venv

# Activate the environment
source in4m-venv/bin/activate

# Install requirements
pip3 install -r requirements.txt

# Make in4m executable if it is not already, and you are done
chmod +x in4m

Usage

usage: in4m [-h] [--no-cache] [--limit LIMIT] [--set-sources SET_SOURCES] [--list-sources] [--no-links] [--keyword KEYWORD]

A tool to get up-to-date information security news from curated sources

options:
  -h, --help            show this help message and exit
  --no-cache, -nc       Force scraping and ignore cached results
  --limit LIMIT, -l LIMIT
                        Limit number of news items per source
  --set-sources SET_SOURCES, -ss SET_SOURCES
                        Set the news sources to use (comma-separated list of source Names or Numbers NO-SPACE or `all` for all sources)
  --list-sources, -ls   List available and currently selected news sources
  --no-links, -nl       Do not show URLs to news
  --keyword KEYWORD, -k KEYWORD
                        Search for keywords in news titles
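The `--set-sources` value described above (a comma-separated list of source names or numbers, or `all`) could be resolved roughly as sketched below. The function name and the source list are hypothetical, not taken from in4m's code; numbers are assumed to be 1-based positions matching the `--list-sources` output.

```python
# Hypothetical sketch of resolving a --set-sources value such as
# "hackread,watchtowr,1" or "all" against an ordered source list.
def resolve_sources(value, available):
    if value.lower() == "all":
        return list(available)
    selected = []
    for token in value.split(","):
        token = token.strip()
        if token.isdigit():
            # Numbers are treated as 1-based positions in the source list.
            selected.append(available[int(token) - 1])
        elif token in available:
            selected.append(token)
        else:
            raise ValueError(f"Unknown source: {token}")
    return selected

sources = ["hackread", "watchtowr", "thehackernews"]
print(resolve_sources("watchtowr,1", sources))  # ['watchtowr', 'hackread']
print(resolve_sources("all", sources))          # all three sources
```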

Examples

Set three sources you like and trigger scraping.

./in4m -ss hackread,watchtowr,1 --no-cache

Set all sources.

./in4m -ss all -nc

List sources.

./in4m -ls

Search for the keyword cve (case-insensitive), limited to three items per source.

./in4m -k cve -l3

Important: To avoid unnecessary scraping, only the currently set sources are scraped and saved to the cache. If you add new sources that haven't been scraped yet, include the -nc flag to re-trigger scraping.