#PYTHON #DATABASE #MYSQL #VERTICA #PARQUET #CSV
def to_csv(engine,
           sql_query: str,
           file_name: os.PathLike,
           compression=CSV_COMPRESSION_GZIP,
           func_print: Callable = print) -> int

Run a SQL query statement and write the result set to a CSV file, with optional compression.

Arguments:

- `engine` (Connection) - database connection or SQLAlchemy `Engine`.
- `sql_query` (str) - SQL query statement (`SELECT` only).
- `file_name` (os.PathLike) - output file name with extension (example: `./mycsv.csv.gz`).
- `compression` (str, optional) - compression type (`plain` | `gzip` | `zip`). Defaults to `CSV_COMPRESSION_GZIP`.
- `func_print` (Callable, optional) - callback function for printing messages. Defaults to `print`.

Raises:

- `ex` - error raised during query execution or file writing.

Returns:

- `int` - total count of records written.

def read_csv(filename: os.PathLike, **pandas_option) -> pd.DataFrame
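`to_csv` and `read_csv` form a natural round trip: export a query result to a compressed CSV, then load it back. Their implementations are not shown here, so the sketch below reproduces the documented behavior with `sqlite3` and pandas directly; the table, file name, and the assumption that pandas calls are equivalent are all illustrative.

```python
# Hypothetical round trip. to_csv/read_csv belong to this module and are not
# shown, so this sketch uses sqlite3 + pandas to produce the same effect.
import sqlite3
import tempfile
from pathlib import Path

import pandas as pd

# Throwaway table standing in for a real MySQL/Vertica source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])

# Equivalent of: total = to_csv(engine, "SELECT * FROM t", "./mycsv.csv.gz")
df = pd.read_sql_query("SELECT * FROM t", conn)
out = Path(tempfile.mkdtemp()) / "mycsv.csv.gz"
df.to_csv(out, index=False, compression="gzip")
total = len(df)  # to_csv is documented to return the record count

# Equivalent of: df2 = read_csv("./mycsv.csv.gz")
df2 = pd.read_csv(out)  # pandas infers gzip compression from the extension
print(total, list(df2.columns))  # → 2 ['id', 'name']
```

Note the `SELECT`-only restriction on `sql_query`: the function reads a result set, so statements without one (e.g. `INSERT`) would have nothing to export.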
Read a CSV file into a pandas DataFrame.
pandas options: https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html

Arguments:

- `filename` (os.PathLike) - file name.

Returns:

- `pd.DataFrame` - pandas DataFrame.

def head_csv(filename: os.PathLike,
             nrows: int = 10,
             **pandas_option) -> pd.DataFrame
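Judging by its signature, `head_csv` is presumably a thin wrapper over the `nrows` option of `pandas.read_csv`. This standalone sketch shows the equivalent call (the wrapper itself is not shown; the in-memory CSV is illustrative):

```python
# Equivalent of head_csv("data.csv", nrows=10), assuming it delegates to
# pandas.read_csv's nrows option. Uses an in-memory CSV so it runs standalone.
import io

import pandas as pd

csv_text = "id,name\n" + "\n".join(f"{i},row{i}" for i in range(100))

head = pd.read_csv(io.StringIO(csv_text), nrows=10)  # first 10 data rows only
print(len(head))  # → 10
```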
Read the first records of a CSV file.

Arguments:

- `filename` (os.PathLike) - file name.
- `nrows` (int, optional) - number of rows to read. Defaults to 10.

Returns:

- `pd.DataFrame` - pandas DataFrame.

def batch_csv(filename: os.PathLike,
              batch_size: int = 10000,
              **pandas_option) -> Iterator[pd.DataFrame]
Read a CSV file as an iterator of DataFrames, one batch at a time.

Arguments:

- `filename` (os.PathLike) - file name.
- `batch_size` (int, optional) - batch size (chunk size) in rows. Defaults to 10000.

Yields:

- `Iterator[pd.DataFrame]` - DataFrame batches.
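`batch_csv` presumably delegates to the `chunksize` option of `pandas.read_csv`, which yields DataFrames lazily instead of loading the whole file; this keeps memory bounded for large exports. A minimal equivalent sketch (the in-memory CSV and batch size are illustrative):

```python
# Equivalent of: for batch in batch_csv("data.csv", batch_size=10): ...
# assuming batch_csv delegates to pandas.read_csv(..., chunksize=batch_size).
import io

import pandas as pd

csv_text = "id\n" + "\n".join(str(i) for i in range(25))

sizes = []
for batch in pd.read_csv(io.StringIO(csv_text), chunksize=10):
    sizes.append(len(batch))  # each batch is a regular pd.DataFrame
print(sizes)  # → [10, 10, 5]  (last batch holds the remainder)
```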