spark-poc

This project will test intersections and interactions between GenevaERS and Apache Spark. When complete, this repo will include the following:

  • An overview of the POC (here in this document)
  • Source data assets, including links to publicly available data files and scripts to convert them into appropriate formats
  • Spark code to produce a baseline output (see the sketch after this list)
  • Spark code leveraging jzos, the z/OS java IO routines
  • Spark code leveraging genlib, encapsulated GenevaERS utilities that perform GenevaERS functions in stand-alone mode
  • GenevaERS XML defining GenevaERS processes to produce the same outputs.
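
Since the baseline Spark code is not yet in the repo, the following is a minimal sketch of what such a job might look like, assuming CSV source data; the file paths and column names (account_id, amount) are illustrative placeholders rather than the actual POC assets.

```python
# A minimal sketch of a baseline Spark job, assuming CSV source data.
# File paths and column names are placeholders, not the actual POC assets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-poc-baseline").getOrCreate()

# Read the (hypothetical) partitioned source files produced by the data scripts.
events = spark.read.csv("data/events_*.csv", header=True, inferSchema=True)

# Produce a simple summary output: record counts and totals per key.
summary = (
    events.groupBy("account_id")
          .agg(F.count("*").alias("record_count"),
               F.sum("amount").alias("total_amount"))
)

summary.write.mode("overwrite").csv("output/baseline_summary")

spark.stop()
```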

This project has been built with the hope that others can download it, run it, and analyze the outputs. This should give a sense of GenevaERS capabilities and how they compare with those of the more widely known Apache Spark.

Here is the framework being considered for the next-generation GenevaERS (see Slide 3).

This outlines the Spark POC approach (see Slide 4).

And this starts to describe what will be learned from this POC (see the GenevaERS and Spark POC slides).

More information about the initiative can be found in the activity entry on the GenevaERS.org website.

The plan for the POC is as follows:

Each week-ending date below lists the tasks to be completed as of the end of that week.

8/4/20

[ ] Maven build of Spark jzos on z/OS

[ ] Initial design of outputs

[ ] Data design complete (ASCII, FTP, zip, etc.)

8/11/20

[ ] Have UR20 working under Spark

[ ] Final design of outputs to be produced

[ ] Sample data ready for use

8/18/20

[ ] Spark code complete

[ ] UR45 conversion complete

[ ] Full data conversion with partitioning

[ ] GenevaERS views built and run

8/25/20

[ ] Initial runs on cloud, and z/OS

[ ] Repartitioning, performance tuning

[ ] Execution of the following configs:

  [ ] Spark on Cloud

  [ ] Spark/Jzos

  [ ] Spark UR20

  [ ] Spark UR45

  [ ] GenevaERS

9/1/20

[ ] Execution, tuning, testing, rerun

9/8/20

[ ] End of technical work, build presentation

9/15/20

[ ] OMP Presentation

Data files will be created as follows:

[ ] Scripts in the repo, along with an add-in Python program, will create partitioned datasets with a serial number on each record for partitioning.

[ ] These files will be ASCII (UTF-8) text, and possibly tar files, so they can be uploaded to the mainframe for further testing.
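
A minimal sketch of that partitioning step is shown below, assuming the public source file is plain text; the input/output names and partition count are illustrative, not the actual repo scripts.

```python
# A minimal sketch of partitioning a text source file with serial numbers.
# Input/output names and the partition count are placeholders.
import os
import tarfile

INPUT_FILE = "source_data.txt"   # hypothetical downloaded source file
OUTPUT_DIR = "partitions"
NUM_PARTITIONS = 4

os.makedirs(OUTPUT_DIR, exist_ok=True)

# Open one output file per partition, written as UTF-8 (ASCII-range) text.
outputs = [
    open(os.path.join(OUTPUT_DIR, f"part_{i:02d}.txt"), "w", encoding="utf-8")
    for i in range(NUM_PARTITIONS)
]

# Prefix each record with a serial number and round-robin it to a partition.
with open(INPUT_FILE, "r", encoding="utf-8") as src:
    for serial, line in enumerate(src):
        outputs[serial % NUM_PARTITIONS].write(f"{serial:010d},{line}")

for f in outputs:
    f.close()

# Optionally bundle the partitions into a tar file for upload to the mainframe.
with tarfile.open("partitions.tar", "w") as tar:
    tar.add(OUTPUT_DIR)
```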
