Command line ping single url #39

Merged
74 changes: 70 additions & 4 deletions README.md
@@ -54,20 +54,84 @@ PODPING_RUNAS_USER="podping" ./target/release/podping

## Blockchain Writer (hive-writer.py)

The python script `hive-writer.py` does the following:
The python script `hive-writer.py` sends podpings to Hive:

```
usage: hive-writer [options]

PodPing - Runs as a server and writes a stream of URLs to the Hive Blockchain or sends a single URL to Hive (--url option)

optional arguments:
-h, --help show this help message and exit
-q, --quiet Minimal output
-v, --verbose Lots of output
-s , --socket <port> Socket to listen on for each new url, returns either
-z , --zmq <port> for ZMQ to listen on for each new url, returns either
-u , --url <url> Takes in a single URL and sends a single podping to Hive, needs HIVE_SERVER_ACCOUNT and HIVE_POSTING_KEY ENV variables set
-e , --errors Deliberately force error rate of <int>%
```

There are two main modes of operation:
1. Run as a server waiting for URLs on either a simple socket (--socket) or a ZMQ socket (--zmq)
2. Send a single URL from the command line (--url)

### Environment

In order to operate, ```hive-watcher``` must be given two ENV variables. The third ENV variable will use a test version of Hive which may or may not be available:
In order to operate, ```hive-writer.py``` must be given two ENV variables. The third ENV variable will use a test version of Hive which may or may not be available:
```
"env": {
"HIVE_SERVER_ACCOUNT" : "blocktvnews",
"HIVE_POSTING_KEY": "5KRBCq3D7NiYH2E8AgshtthisisfakeJW4uCJWn8Qrpe9Rei2ZYx",
"USE_TEST_NODE": "False"
}
```

Hive accounts can be created with the tool [Hive On Board](https://hiveonboard.com?ref=brianoflondon). However, only *Podpings* from accounts which are approved by Podping and PodcastIndex will be recognised. The current authorised list can always be seen [here](https://peakd.com/@podping/following).

### Example

Send a single podping using account stored in ENV variable:

```python hive-writer/hive-writer.py --url http://feed.nashownotes.com/rss.xml```

Output:
```
2021-05-24 10:42:18,258 INFO root MainThread : Podping startup sequence initiated, please stand by, full bozo checks in operation...
2021-05-24 10:42:19,962 INFO root MainThread : Startup of Podping status: SUCCESS! Hit the BOOST Button.
2021-05-24 10:42:19,962 INFO root MainThread : One URL Received: http://feed.nashownotes.com/rss.xml
2021-05-24 10:42:20,846 INFO root MainThread : Transaction sent: f91a73abd9905135ef4e1ed979cc20f184fbc72e - Num urls: 1 - Json size: 77
```

The transaction can be found on the Hive blockchain using the transaction number: [f91a73abd9905135ef4e1ed979cc20f184fbc72e](https://hiveblocks.com/tx/f91a73abd9905135ef4e1ed979cc20f184fbc72e)


Similarly, to run as a server:

```python hive-writer/hive-writer.py --zmq 9999```

This will initiate a startup sequence which tests the ENV-supplied credentials for the ability to write to Hive and checks the account's resource credits:

```
2021-05-24 11:29:49,495 INFO root MainThread : Podping startup sequence initiated, please stand by, full bozo checks in operation...
2021-05-24 11:29:51,594 INFO root MainThread : Testing Account Resource Credits - before 99.30%
2021-05-24 11:30:09,730 INFO root MainThread : Transaction sent: 9bfdac9088d75460bc9a560652eda2b86d2f49e9 - Num urls: 0 - Json size: 95
2021-05-24 11:30:09,730 INFO root MainThread : Testing Account Resource Credits.... 5s
2021-05-24 11:30:12,030 INFO root MainThread : Testing Account Resource Credits - after 99.29%
2021-05-24 11:30:12,030 INFO root MainThread : Capacity for further podpings : 28825.1
2021-05-24 11:30:28,903 INFO root MainThread : Transaction sent: e8573e27e02f561ea3e9f037fe6f7823f4445ecb - Num urls: 0 - Json size: 125
2021-05-24 11:30:28,903 INFO root MainThread : Startup of Podping status: SUCCESS! Hit the BOOST Button.
```

The line ```Capacity for further podpings : 28825.1``` gives a very rough indication of how many podpings this account can send in its present state.
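That capacity figure comes from the resource-credit (RC) mana consumed by the startup test posts. A minimal sketch of the calculation, assuming the same inputs the script reads from the manabar (the function name here is illustrative, not part of the script's API):

```python
def podping_capacity(mana_before: int, mana_after: int) -> float:
    """Estimate how many more podpings an account can afford.

    mana_before/mana_after are RC mana readings taken around one test
    post; a zero-cost reading is treated as effectively unlimited,
    matching the sentinel the script uses to avoid division by zero.
    """
    cost = mana_before - mana_after
    if cost == 0:
        return 1_000_000.0  # sentinel: the post cost no measurable mana
    return mana_after / cost
```

So readings of 100 mana before and 90 after a post give a capacity of 9 further podpings at that cost.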

This will start up and wait for a ZMQ connection on port 9999. The server waits for a single URL per connection, returning "OK" or "ERR" if something has failed. Every 3 seconds ```hive-writer.py``` posts the URLs received (up to a maximum of 130) to the Hive blockchain. If URLs arrive faster than 130 every 3 seconds, they are held in a buffer and written out in subsequent batches.
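For the plain --socket mode, a client needs nothing beyond the standard library: open a connection, send one URL, read back the reply. A sketch under those assumptions (host, port, and function name are illustrative; the ZMQ mode works the same way but over a REQ socket):

```python
import socket


def send_podping(url: str, host: str = "localhost", port: int = 9999,
                 timeout: float = 5.0) -> str:
    """Send one feed URL to a hive-writer --socket server.

    Returns the server's short reply, expected to be "OK" or "ERR".
    """
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(url.encode("utf-8"))
        return conn.recv(16).decode("utf-8")
```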



<br>

## Blockchain Watcher (hive-watcher.py)

The stream of *podpings* can be watched with the ```hive-watcher.py``` code. In addition, there is a simplified version, ```simple-watcher.py```, which is the best way to understand what is going on. There is also a JavaScript version in [hive-watcher.js](https://github.com/Podcastindex-org/podping.cloud/blob/main/hive-watcher-js/hive-watcher.js)


@@ -81,6 +145,7 @@ With default arguments it will print to the StdOut a log of each new URL that ha
optional arguments:
-h, --help show this help message and exit
-H, --history-only Report history only and exit
-d, --diagnostic Show diagnostic posts written to the blockchain
-r , --reports Time in MINUTES between periodic status reports, use 0 for no periodic reports
-s , --socket <IP-Address>:<port> Socket to send each new url to
-t, --test Use a test net API
@@ -89,6 +154,7 @@ optional arguments:

-b , --block Hive Block number to start replay at or use:
-o , --old Time in HOURS to look back up the chain for old pings (default is 0)
-y , --startdate <%Y-%m-%d %H:%M:%S> Date/Time to start the history
```
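The new --startdate option converts a timestamp into the same look-back window that --old expresses in hours, with the explicit date taking precedence. The selection logic can be sketched as follows (the helper name is illustrative, not part of the script):

```python
from datetime import datetime, timedelta
from typing import Optional


def history_window(startdate: Optional[str], old_hours: float) -> timedelta:
    """Return how far back to scan: --startdate wins over --old."""
    if startdate:
        start = datetime.strptime(startdate, "%Y-%m-%d %H:%M:%S")
        return datetime.now() - start
    return timedelta(hours=old_hours)
```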
### What it does

@@ -145,7 +211,7 @@ The write operation usually takes about 0.8s. At present ```hive-writer``` is no

## Blockchain Watcher (hive-watcher.py)

The watcher script is how you see which podcast feed urls have signaled an update.
The watcher script is how you see which podcast feed urls have signaled an update.

The python script `hive-watcher.py` is more fully featured, allowing for socket listening and other options.

@@ -164,7 +230,7 @@ This is the easiest way to get started watching the blockchain for feed updates.
Each time a feed update notification is detected on the blockchain, the full url of the feed is printed to STDOUT on a new line. Each
url that is output represents a new episode that has been published, or some other significant update to that podcast feed.

You can watch this output as a way to signal your system to re-parse a podcast feed. Or you can use it as a starting template to
develop a more customized script for your environment. It's dead simple!
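That pattern is a line-at-a-time read of the watcher's stdout. A sketch, assuming ```hive-watcher.py``` sits at its repository path (the helper is split out so it can be tested against any stream; the re-parse step is a placeholder for your own feed handling):

```python
import subprocess
from typing import IO, Iterator


def iter_feed_urls(stream: IO[str]) -> Iterator[str]:
    """Yield each non-empty line of watcher output as a feed URL."""
    for line in stream:
        url = line.strip()
        if url:
            yield url


def watch(cmd=("python", "hive-watcher/hive-watcher.py")) -> None:
    # Illustrative: trigger a re-parse for each updated feed as it arrives.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for url in iter_feed_urls(proc.stdout):
        print(f"re-parse {url}")  # replace with your feed parser
```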

<br>
29 changes: 24 additions & 5 deletions hive-watcher/hive-watcher.py
@@ -2,9 +2,10 @@
import json
import logging
import os
from datetime import datetime, timedelta
from datetime import date, datetime, timedelta
from ipaddress import IPv4Address, IPv6Address, AddressValueError
from socket import AF_INET, SOCK_STREAM, socket
from time import strptime
from typing import Set, Optional, Union

import beem
@@ -65,6 +66,18 @@ class Pings:
help="Time in HOURS to look back up the chain for old pings (default is 0)",
)

block_history_argument_group.add_argument(
"-y",
"--startdate",
action="store",
type=str,
required=False,
metavar="",
default=0,
help="<%%Y-%%m-%%d %%H:%%M:%%S> Date/Time to start the history",
)


my_parser.add_argument(
"-H",
"--history-only",
@@ -143,8 +156,8 @@ def output(post, quiet=False, use_test_node=False, diagnostic=False) -> int:
data = json.loads(post.get("json"))
if diagnostic:
logging.info(
f"Diagnositc - {post.get('timestamp')} "
f"- {post.get('trx_id')} - {post.get('message')}"
f"Diagnostic - {post.get('timestamp')} "
f"- {data.get('server_account')} - {post.get('trx_id')} - {data.get('message')}"
)
logging.info(
json.dumps(data, indent=2)
@@ -462,7 +475,7 @@ def main() -> None:
logging.info("---------------> Using Main Hive Chain ")

# scan_history will look back over the last 1 hour reporting every 15 minute chunk
if my_args["old"] or my_args["block"]:
if my_args["old"] or my_args["block"] or my_args["startdate"]:
report_minutes = my_args["reports"]
if my_args["block"]:
block_num = my_args["block"]
@@ -475,7 +488,13 @@
diagnostic=diagnostic,
)
else:
hours_ago = timedelta(hours=my_args["old"])
if my_args["startdate"]:
arg_time = my_args["startdate"]
start_date = datetime.strptime(arg_time, "%Y-%m-%d %H:%M:%S")
hours_ago = datetime.now() - start_date
else:
hours_ago = timedelta(hours=my_args["old"])

scan_history(
hive,
hours_ago=hours_ago,
2 changes: 1 addition & 1 deletion hive-writer/hive-writer-test.py
@@ -41,7 +41,7 @@ def loop_test():
""" Run a simple loop test on the hive-writer program """
start = time.perf_counter()
# Do 10 requests, waiting each time for a response
for request in range(2):
for request in range(7):
print(f"Sending request {request} …")
data = f"https://www.brianoflondon.me/podcast2/brians-forest-talks-exp.xml?q={request}"
zsocket.send(data.encode())
111 changes: 70 additions & 41 deletions hive-writer/hive-writer.py
@@ -47,7 +47,7 @@
# COMMAND LINE
# ---------------------------------------------------------------

app_description = """ PodPing - Runs as a server and writes a stream of URLs to the Hive Blockchain """
app_description = """ PodPing - Runs as a server and writes a stream of URLs to the Hive Blockchain or sends a single URL to Hive (--url option) """


my_parser = argparse.ArgumentParser(prog='hive-writer',
@@ -60,17 +60,28 @@
group_noise.add_argument('-q', '--quiet', action='store_true', help='Minimal output')
group_noise.add_argument('-v', '--verbose', action='store_true', help='Lots of output')

my_parser.add_argument('-s', '--socket',

group_action_type = my_parser.add_mutually_exclusive_group()
group_action_type.add_argument('-s', '--socket',
action='store', type=int, required=False,
metavar='',
default= None,
help='<port> Socket to listen on for each new url, returns either ')
my_parser.add_argument('-z', '--zmq',
group_action_type.add_argument('-z', '--zmq',
action='store', type=int, required=False,
metavar='',
default= None,
help='<port> for ZMQ to listen on for each new url, returns either ')

group_action_type.add_argument('-u', '--url',
action='store',
type=str,
required=False,
metavar='',
default=None,
help="<url> Takes in a single URL and sends a single podping to Hive, needs HIVE_SERVER_ACCOUNT and HIVE_POSTING_KEY ENV variables set")


my_parser.add_argument('-e', '--errors',
action='store', type=int, required=False,
metavar='',
@@ -90,9 +101,10 @@
# Adding a Queue system to the Hive send_notification section
hive_q = queue.Queue()

def startup_sequence(ignore_errors= False) -> bool:
def startup_sequence(ignore_errors= False, resource_test=True) -> bool:
""" Run though a startup sequence connect to Hive and check env variables
Exit with error unless ignore_errors passed as True """
Exit with error unless ignore_errors passed as True
Defaults to sending two startup resource_test posts and checking resources """

global hive
global server_account, wif
@@ -138,39 +150,41 @@ def startup_sequence(ignore_errors= False) -> bool:
error_messages.append(f'{ex} occurred {ex.__class__}')
logging.error(error_messages[-1])

if acc:
try: # Now post two custom json to test.
manabar = acc.get_rc_manabar()
logging.info(f'Testing Account Resource Credits - before {manabar.get("current_pct"):.2f}%')
custom_json = {
"server_account" : server_account,
"USE_TEST_NODE" : USE_TEST_NODE,
"message" : "Podping startup initiated"
}
error_message , success = send_notification(custom_json, 'podping-startup')

if not success:
error_messages.append(error_message)
logging.info('Testing Account Resource Credits.... 5s')
time.sleep(2)
manabar_after = acc.get_rc_manabar()
logging.info(f'Testing Account Resource Credits - after {manabar_after.get("current_pct"):.2f}%')
cost = manabar.get('current_mana') - manabar_after.get('current_mana')
if cost == 0: # skip this test if we're going to get ZeroDivision
capacity = 1000000
else:
capacity = manabar_after.get('current_mana') / cost
logging.info(f'Capacity for further podpings : {capacity:.1f}')
custom_json['v'] = CURRENT_PODPING_VERSION
custom_json['capacity'] = f'{capacity:.1f}'
custom_json['message'] = 'Podping startup complete'
error_message , success = send_notification(custom_json, 'podping-startup')
if not success:
error_messages.append(error_message)

except Exception as ex:
error_messages.append(f'{ex} occurred {ex.__class__}')
logging.error(error_messages[-1])

if resource_test:
if acc:
try: # Now post two custom json to test.
manabar = acc.get_rc_manabar()
logging.info(f'Testing Account Resource Credits - before {manabar.get("current_pct"):.2f}%')
custom_json = {
"server_account" : server_account,
"USE_TEST_NODE" : USE_TEST_NODE,
"message" : "Podping startup initiated"
}
error_message , success = send_notification(custom_json, 'podping-startup')

if not success:
error_messages.append(error_message)
logging.info('Testing Account Resource Credits.... 5s')
time.sleep(2)
manabar_after = acc.get_rc_manabar()
logging.info(f'Testing Account Resource Credits - after {manabar_after.get("current_pct"):.2f}%')
cost = manabar.get('current_mana') - manabar_after.get('current_mana')
if cost == 0: # skip this test if we're going to get ZeroDivision
capacity = 1000000
else:
capacity = manabar_after.get('current_mana') / cost
logging.info(f'Capacity for further podpings : {capacity:.1f}')
custom_json['v'] = CURRENT_PODPING_VERSION
custom_json['capacity'] = f'{capacity:.1f}'
custom_json['message'] = 'Podping startup complete'
error_message , success = send_notification(custom_json, 'podping-startup')
if not success:
error_messages.append(error_message)

except Exception as ex:
error_messages.append(f'{ex} occurred {ex.__class__}')
logging.error(error_messages[-1])


if error_messages:
@@ -334,11 +348,16 @@ def failure_retry(url_list, failure_count = 0):
""" Recursion... see recursion """
global peak_fail_count
if failure_count > 0:
logging.error(f"Waiting {HALT_TIME[failure_count]}")
logging.error(f"Waiting {HALT_TIME[failure_count]}s")
time.sleep(HALT_TIME[failure_count])
logging.info(f"RETRYING num_urls: {len(url_list)}")
else:
logging.info(f"Received num_urls: {len(url_list)}")
if type(url_list) == list:
logging.info(f"Received num_urls: {len(url_list)}")
elif type(url_list) == str:
logging.info(f"One URL Received: {url_list}")
else:
logging.info(f"{url_list}")

trx_id, success = send_notification(url_list)
# Send reply back to client
@@ -372,7 +391,17 @@ def failure_retry(url_list, failure_count = 0):

def main() -> None:
""" Main man what counts... """
startup_sequence()
if myArgs['url']:
url = myArgs['url']
if startup_sequence(resource_test=False):
answer, failure_count = failure_retry(url)
return
else:
raise(SystemExit)
return


startup_sequence(resource_test=True)
if myArgs['socket']:
HOST, PORT = "localhost", myArgs['socket']
# Create the server, binding to localhost on port 9999