# noswot

A web of trust made from nostr follows and reports.

Welcome to noswot. The vision here is to see what becomes possible when we can trust strangers on the internet.

Hypothesis: a web of trust algorithm that is comprehensive, and can run entirely client-side, will enable a range and volume of activity the world has never seen.

This trial tests the algorithm in a database web application that gathers follows and reports, then calculates trust scores from every known perspective every 24-48 hours.
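As a rough illustration only (this is not noswot's actual algorithm; the decay factor, report handling, and function names are all assumptions), a per-perspective trust score over a follow graph might be sketched like this:

```python
# Hypothetical sketch of per-perspective trust scoring -- NOT noswot's actual
# algorithm. Trust decays with each follow hop from the viewer; a report
# subtracts the reporter's own score from the reported pubkey.
from collections import deque

def trust_scores(follows, reports, viewer, decay=0.5):
    """follows/reports: dict pubkey -> set of pubkeys; viewer: root pubkey."""
    scores = {viewer: 1.0}
    queue = deque([viewer])
    while queue:                      # breadth-first walk of the follow graph
        src = queue.popleft()
        for dst in follows.get(src, ()):
            if dst not in scores:     # first (shortest) path sets the score
                scores[dst] = scores[src] * decay
                queue.append(dst)
    for reporter, targets in reports.items():
        w = scores.get(reporter, 0.0)
        for dst in targets:           # reports reduce trust in the target
            scores[dst] = scores.get(dst, 0.0) - w
    return scores

follows = {"alice": {"bob"}, "bob": {"carol"}}
reports = {"bob": {"mallory"}}
print(trust_scores(follows, reports, "alice"))
# {'alice': 1.0, 'bob': 0.5, 'carol': 0.25, 'mallory': -0.5}
```

Because the walk starts from a single viewer, the same graph yields different scores from each perspective, which is why recomputing "from every known perspective" is the expensive part.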
You can participate in the trial by using the tool at noswot.org.

If you have any questions or don't see what you're looking for, please open an issue or mention David Sterry on nostr.
## Requirements

- Python 3.9+
- MariaDB or MySQL
- RabbitMQ:

```
sudo apt-get install rabbitmq-server
```
## Installation

```
pip3 install -r requirements.txt
```

- Copy config/default_settings.py to config/settings.py and fill in parameters
- Create a database in MariaDB, then:

```
mariadb -u dbuser -p {dbname} < schema.sql
```
## Cron Scripts

- hoover.py - downloads and saves the day's worth of notes, contact lists, and reports
- process_profiles.py - parses out profile fields into the `profile` table
- update_contacts.py - saves follows to the `contact` table, creating a follow log along the way
- score.py - calculates trust scores
- npub_name_mapping.py - generates nn.js for the front-end to replace known npubs with names
- preferred_relay.py - populates `profile` with each user's most commonly used relay

...and many more.
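A hypothetical schedule for wiring these scripts together (the install path, run times, and log locations below are all assumptions for illustration, not part of the repo):

```shell
# Hypothetical crontab -- run the daily pipeline in dependency order.
0 2 * * *  cd /opt/noswot && python3 hoover.py           >> tmp/hoover.log 2>&1
0 4 * * *  cd /opt/noswot && python3 process_profiles.py >> tmp/profiles.log 2>&1
30 4 * * * cd /opt/noswot && python3 update_contacts.py  >> tmp/contacts.log 2>&1
0 5 * * *  cd /opt/noswot && python3 score.py            >> tmp/score.log 2>&1
```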
## Celery Scripts

- runner.py - main celery process
- fill_in_profiles.py - sends profile grabber tasks to the celery queue
## Development

- Run `python3 hoover.py` to pull in recent nostr data
- Then process profiles and contacts with:

```
time python process_profiles.py | sort | uniq -c
```

to get a count of errors, and:

```
python update_contacts.py
```

- To collect missing profiles referred to in contact lists, run celery:

```
celery -A runner worker --loglevel=DEBUG
```

- Then fire off profile grabbing tasks:

```
python fill_in_profiles.py
```

- After running hoover, profiles, contacts, and score, you can run:

```
python main.py
```

to start the Flask server.
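The `sort | uniq -c` pipeline above tallies duplicate lines so repeated error messages show up with a count. The equivalent tally in Python (a standalone illustration, not part of the repo) is:

```python
# Tally duplicate lines, mimicking `sort | uniq -c` from the step above.
from collections import Counter

def count_lines(lines):
    """Return (line, count) pairs, most frequent first."""
    return Counter(lines).most_common()

print(count_lines(["bad json", "ok", "bad json"]))
# [('bad json', 2), ('ok', 1)]
```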
## Docker Compose

To run the project with docker-compose, use the following command:

```
docker compose up -d
```

Open a shell in the web container to run the python scripts, for example:

```
docker compose exec web ash
python hoover.py
```
## Contributing

This project uses the C4 (Collective Code Construction Contract) process for contributions.