- If you encounter an error while running the notebook, wait for about an hour and run from the point of error again.
- If there is an issue in the data being saved back from the SPSS results to DB2 Warehouse:
  - Double-click on the `export` node, click on `Change Data Asset`, and ensure you have kept the `Replace existing dataset` option.
  - Select some existing `csv` dataset and run the modeler flow.
  - NOTE: THIS WILL REPLACE THE DATA ASSET. BE SURE TO MAKE A COPY BEFORE PERFORMING THE STEPS.
- Now, open the notebook in your project and move to the last cell. Click on the `10/01` tab in the top left-hand corner. Select the saved dataset and click on `Insert Pandas DataFrame`.
- Run the above-mentioned cell and ensure the DataFrame is named `df_data_1`.
- Create a new cell by clicking on the `+` button on the top-right corner, and add the below code to that cell:
```python
import ibm_db

i = 1
for x in df_data_1.values:
    # Prefix each row with a running index, then insert it into the table
    vals = (i,) + tuple(x)
    sql = "INSERT INTO DASHXXXX.DATA_FOR_COGNOS VALUES" + str(vals)
    ins_sql = ibm_db.prepare(conn, sql)
    ibm_db.execute(ins_sql)
    i = i + 1
```
Before running the cell, be sure to get the schema name of your DB2 Warehouse and replace `DASHXXXX` with it. You can find your schema name in your DB2 Warehouse instance when you open the table, as in `DASHXXXX.Table_Name`.
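Building the INSERT statement by string concatenation, as above, can break on values that contain quotes or commas. A safer sketch is to build the indexed row tuples first and then insert them through parameter markers. The `rows_with_index` helper and the small sample frame below are hypothetical illustrations, not part of the pattern; the `conn` handle and the `DASHXXXX.DATA_FOR_COGNOS` table are assumed from the steps above:

```python
import pandas as pd

def rows_with_index(df, start=1):
    """Return one tuple per DataFrame row, prefixed with a running index,
    matching the column layout expected by the INSERT above."""
    return [(start + i,) + tuple(row) for i, row in enumerate(df.values)]

# Small illustrative frame standing in for df_data_1
df = pd.DataFrame({"name": ["a", "b"], "score": [10, 20]})
rows = rows_with_index(df)
# rows is now [(1, 'a', 10), (2, 'b', 20)]

# With a live connection `conn`, each row could then be inserted using
# `?` parameter markers instead of string concatenation, e.g.:
# stmt = ibm_db.prepare(conn, "INSERT INTO DASHXXXX.DATA_FOR_COGNOS VALUES (?, ?, ?)")
# for r in rows:
#     ibm_db.execute(stmt, r)
```

The number of `?` markers must match the column count of your table (index column plus one per DataFrame column).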