
Python: a complete and detailed guide to the pandas module



Introduction to the pandas module

Official pandas documentation: https://pandas.pydata.org/pandas-docs/stable/?v=20190307135750

pandas is built on top of NumPy and can be thought of as a tool for working with text and tabular data. It has two main data structures: Series, which is similar to a one-dimensional NumPy array, and DataFrame, which is a two-dimensional, table-like structure.

pandas is the core module for data analysis in Python. It mainly provides five areas of functionality:

1. File I/O: databases (SQL), HTML, JSON, pickle, CSV (txt, Excel), SAS, Stata, HDF, and so on.
2. Single-table operations: insert/delete/update/select, slicing, higher-order functions, group-by aggregation, and conversion to and from dict and list.
3. Joining and merging of multiple tables.
4. Basic plotting.
5. Basic statistical analysis.

A short sketch of the first two areas appears right after this list.
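
The sketch below exercises two of those areas, file I/O and group-by aggregation. It is only an illustration: sales.csv and its region/amount columns are hypothetical and are not used elsewhere in this tutorial.

import pandas as pd

df = pd.read_csv('sales.csv')                    # file I/O (hypothetical file)
totals = df.groupby('region')['amount'].sum()    # group-by aggregation
print(totals.to_dict())                          # conversion to a plain dict
# totals.plot(kind='bar')                        # basic plotting (needs matplotlib)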

I. The Series data structure

A Series is a one-dimensional, array-like object made up of a sequence of values and an associated array of data labels, called its index.

A Series behaves like a cross between a list (array) and a dict.

import numpy as np
import pandas as pd

df = pd.Series(0, index=['a', 'b', 'c', 'd'])
print(df)
# a    0
# b    0
# c    0
# d    0
# dtype: int64

print(df.values)
# [0 0 0 0]
print(df.index)
# Index(['a', 'b', 'c', 'd'], dtype='object')

1 Series supports NumPy-style features (positional indexing)

Feature                            Example
Create a Series from an ndarray    Series(arr)
Arithmetic with a scalar           df * 2
Arithmetic between two Series      df1 + df2
Indexing                           df[0], df[[1, 2, 4]]
Slicing                            df[0:2]
Universal functions                np.abs(df)
Boolean filtering                  df[df > 0]

arr = np.array([1, 2, 3, 4, np.nan])
print(arr)
# [ 1.  2.  3.  4. nan]

df = pd.Series(arr, index=['a', 'b', 'c', 'd', 'e'])
print(df)
# a    1.0
# b    2.0
# c    3.0
# d    4.0
# e    NaN
# dtype: float64

print(df ** 2)
# a     1.0
# b     4.0
# c     9.0
# d    16.0
# e     NaN
# dtype: float64

print(df[0])
# 1.0
print(df['a'])
# 1.0

print(df[[0, 1, 2]])
# a    1.0
# b    2.0
# c    3.0
# dtype: float64

print(df[0:2])
# a    1.0
# b    2.0
# dtype: float64

print(np.sin(df))
# a    0.841471
# b    0.909297
# c    0.141120
# d   -0.756802
# e         NaN
# dtype: float64

print(df[df > 1])
# b    2.0
# c    3.0
# d    4.0
# dtype: float64

2 Series supports dict-like features (label indexing)

Feature                       Example
Create a Series from a dict   Series(dic)
Membership test with in       'a' in sr
Label indexing                sr['a'], sr[['a', 'b', 'd']]

df = pd.Series({'a': 1, 'b': 2})
print(df)
# a    1
# b    2
# dtype: int64

print('a' in df)
# True
print(df['a'])
# 1

3 Handling missing data in a Series

Method       Description
dropna()     Drop entries whose value is NaN
fillna()     Fill in missing values
isnull()     Return a boolean array, True where a value is missing
notnull()    Return a boolean array, False where a value is missing

df = pd.Series([1, 2, 3, 4, np.nan], index=['a', 'b', 'c', 'd', 'e'])
print(df)
# a    1.0
# b    2.0
# c    3.0
# d    4.0
# e    NaN
# dtype: float64

print(df.dropna())
# a    1.0
# b    2.0
# c    3.0
# d    4.0
# dtype: float64

print(df.fillna(5))
# a    1.0
# b    2.0
# c    3.0
# d    4.0
# e    5.0
# dtype: float64

print(df.isnull())
# a    False
# b    False
# c    False
# d    False
# e     True
# dtype: bool

print(df.notnull())
# a     True
# b     True
# c     True
# d     True
# e    False
# dtype: bool

II. The DataFrame data structure

A DataFrame is a tabular data structure that contains an ordered collection of columns.

A DataFrame can be thought of as a dict of Series objects that all share the same index.
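
A minimal sketch of that idea (the column names and values here are made up purely for illustration):

import pandas as pd

s1 = pd.Series([1, 2, 3], index=['x', 'y', 'z'])
s2 = pd.Series([10.0, 20.0, 30.0], index=['x', 'y', 'z'])
df = pd.DataFrame({'col1': s1, 'col2': s2})  # each Series becomes one column
print(df)
#    col1  col2
# x     1  10.0
# y     2  20.0
# z     3  30.0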

1 Creating an array of timestamps: date_range

Parameters of date_range:

Parameter   Description
start       Start date
end         End date
periods     Number of periods to generate
freq        Frequency; defaults to 'D' (calendar day). Common aliases include 'H' (hour), 'W' (week), 'B' (business day), 'M' (month end), 'SM' (semi-month end), 'T'/'min' (minute), 'S' (second) and 'A' (year end).

dates = pd.date_range('20190101', periods=6, freq='M')
print(dates)
# DatetimeIndex(['2019-01-31', '2019-02-28', '2019-03-31', '2019-04-30',
#                '2019-05-31', '2019-06-30'],
#               dtype='datetime64[ns]', freq='M')

np.random.seed(1)
arr = 10 * np.random.randn(6, 4)
print(arr)
# [[ 16.24345364  -6.11756414  -5.28171752 -10.72968622]
#  [  8.65407629 -23.01538697  17.44811764  -7.61206901]
#  [  3.19039096  -2.49370375  14.62107937 -20.60140709]
#  [ -3.22417204  -3.84054355  11.33769442 -10.99891267]
#  [ -1.72428208  -8.77858418   0.42213747   5.82815214]
#  [-11.00619177  11.4472371    9.01590721   5.02494339]]

df = pd.DataFrame(arr, index=dates, columns=['c1', 'c2', 'c3', 'c4'])
df

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

III. DataFrame attributes

Attribute        Description
dtypes           Data type of each column
index            The row labels (index)
columns          The column labels
values           The underlying data, without the index or column labels
describe         Summary statistics (count, mean, std, min, quartiles, max) for each numeric column
transpose / T    Swap rows and columns
sort_index       Sort by row or column labels
sort_values      Sort by the data values

# Data type of each column
print(df.dtypes)
# c1    float64
# c2    float64
# c3    float64
# c4    float64
# dtype: object

df

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

print(df.index)
# DatetimeIndex(['2019-01-31', '2019-02-28', '2019-03-31', '2019-04-30',
#                '2019-05-31', '2019-06-30'],
#               dtype='datetime64[ns]', freq='M')

print(df.columns)
# Index(['c1', 'c2', 'c3', 'c4'], dtype='object')

print(df.values)
# [[ 16.24345364  -6.11756414  -5.28171752 -10.72968622]
#  [  8.65407629 -23.01538697  17.44811764  -7.61206901]
#  [  3.19039096  -2.49370375  14.62107937 -20.60140709]
#  [ -3.22417204  -3.84054355  11.33769442 -10.99891267]
#  [ -1.72428208  -8.77858418   0.42213747   5.82815214]
#  [-11.00619177  11.4472371    9.01590721   5.02494339]]

df.describe()

              c1         c2         c3         c4
count   6.000000   6.000000   6.000000   6.000000
mean    2.022213  -5.466424   7.927203  -6.514830
std     9.580084  11.107772   8.707171  10.227641
min   -11.006192 -23.015387  -5.281718 -20.601407
25%    -2.849200  -8.113329   2.570580 -10.931606
50%     0.733054  -4.979054  10.176801  -9.170878
75%     7.288155  -2.830414  13.800233   1.865690
max    16.243454  11.447237  17.448118   5.828152

df.T

    2019-01-31  2019-02-28  2019-03-31  2019-04-30  2019-05-31  2019-06-30
c1   16.243454    8.654076    3.190391   -3.224172   -1.724282  -11.006192
c2   -6.117564  -23.015387   -2.493704   -3.840544   -8.778584   11.447237
c3   -5.281718   17.448118   14.621079   11.337694    0.422137    9.015907
c4  -10.729686   -7.612069  -20.601407  -10.998913    5.828152    5.024943

# Sort by the row labels (the dates); ascending by default
df.sort_index(axis=0)

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

# Sort by the column labels [c1, c2, c3, c4]; ascending by default
df.sort_index(axis=1)

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

# Sort by the values in column c2 (ascending by default)
df.sort_values(by='c2')

                   c1         c2         c3         c4
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-06-30 -11.006192  11.447237   9.015907   5.024943

IV. Selecting values from a DataFrame

df

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

1 Selecting by column

df['c2']
# 2019-01-31    -6.117564
# 2019-02-28   -23.015387
# 2019-03-31    -2.493704
# 2019-04-30    -3.840544
# 2019-05-31    -8.778584
# 2019-06-30    11.447237
# Freq: M, Name: c2, dtype: float64

df[['c2', 'c3']]

                   c2         c3
2019-01-31  -6.117564  -5.281718
2019-02-28 -23.015387  17.448118
2019-03-31  -2.493704  14.621079
2019-04-30  -3.840544  11.337694
2019-05-31  -8.778584   0.422137
2019-06-30  11.447237   9.015907

2 loc (selecting by row label)

# Select rows by their custom row labels
df.loc['2019-01-01':'2019-01-03']
# Empty DataFrame
# Columns: [c1, c2, c3, c4]
# Index: []
# The result is empty because the index holds month-end dates, so no labels
# fall inside this range.
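
A label slice whose endpoints do cover the month-end labels selects rows as expected; this added example (not part of the original post) picks out the first three rows:

df.loc['2019-01-31':'2019-03-31']

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
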
df[0:3]

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407

3 iloc (NumPy-style positional indexing)

df.values
# array([[ 16.24345364,  -6.11756414,  -5.28171752, -10.72968622],
#        [  8.65407629, -23.01538697,  17.44811764,  -7.61206901],
#        [  3.19039096,  -2.49370375,  14.62107937, -20.60140709],
#        [ -3.22417204,  -3.84054355,  11.33769442, -10.99891267],
#        [ -1.72428208,  -8.77858418,   0.42213747,   5.82815214],
#        [-11.00619177,  11.4472371 ,   9.01590721,   5.02494339]])

# Select data by integer position
print(df.iloc[2, 1])
# -2.493703754774101

df.iloc[1:4, 1:4]

                   c2         c3         c4
2019-02-28 -23.015387  17.448118  -7.612069
2019-03-31  -2.493704  14.621079 -20.601407
2019-04-30  -3.840544  11.337694 -10.998913

4 Selecting with boolean conditions

df[df['c1'] > 0]

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407

df[(df['c1'] > 0) & (df['c2'] > -8)]

                   c1        c2         c3         c4
2019-01-31  16.243454 -6.117564  -5.281718 -10.729686
2019-03-31   3.190391 -2.493704  14.621079 -20.601407
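
The same filter can also be written with DataFrame.query, which takes the condition as a string; a small added sketch:

df.query('c1 > 0 and c2 > -8')   # equivalent to df[(df['c1'] > 0) & (df['c2'] > -8)]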

V. Replacing values in a DataFrame

df

                   c1         c2         c3         c4
2019-01-31  16.243454  -6.117564  -5.281718 -10.729686
2019-02-28   8.654076 -23.015387  17.448118  -7.612069
2019-03-31   3.190391  -2.493704  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

df.iloc[0:3, 0:2] = 0
df

                   c1         c2         c3         c4
2019-01-31   0.000000   0.000000  -5.281718 -10.729686
2019-02-28   0.000000   0.000000  17.448118  -7.612069
2019-03-31   0.000000   0.000000  14.621079 -20.601407
2019-04-30  -3.224172  -3.840544  11.337694 -10.998913
2019-05-31  -1.724282  -8.778584   0.422137   5.828152
2019-06-30 -11.006192  11.447237   9.015907   5.024943

df['c3'] > 10
# 2019-01-31    False
# 2019-02-28     True
# 2019-03-31     True
# 2019-04-30     True
# 2019-05-31    False
# 2019-06-30    False
# Freq: M, Name: c3, dtype: bool

# Replace the entire rows that match the condition
df[df['c3'] > 10] = 100
df

                    c1          c2          c3          c4
2019-01-31    0.000000    0.000000   -5.281718  -10.729686
2019-02-28  100.000000  100.000000  100.000000  100.000000
2019-03-31  100.000000  100.000000  100.000000  100.000000
2019-04-30  100.000000  100.000000  100.000000  100.000000
2019-05-31   -1.724282   -8.778584    0.422137    5.828152
2019-06-30  -11.006192   11.447237    9.015907    5.024943

# Replace the entire rows where c3 equals 100
df = df.astype(np.int32)
df[df['c3'].isin([100])] = 1000
df

              c1    c2    c3    c4
2019-01-31     0     0    -5   -10
2019-02-28  1000  1000  1000  1000
2019-03-31  1000  1000  1000  1000
2019-04-30  1000  1000  1000  1000
2019-05-31    -1    -8     0     5
2019-06-30   -11    11     9     5
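
pandas also has a dedicated replace method for swapping out specific values; a small added sketch using the df above:

df.replace(1000, 0)             # replace every 1000 with 0
df.replace({-5: 0, -10: 0})     # replace several values via a mapping
df['c3'].replace(9, 99)         # replace values in a single column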

VI. Reading CSV files

import pandas as pd
from io import StringIO

test_data = '''5.1,,1.4,0.2
4.9,3.0,1.4,0.2
4.7,3.2,,0.2
7.0,3.2,4.7,1.4
6.4,3.2,4.5,1.5
6.9,3.1,4.9,
,,,'''
test_data = StringIO(test_data)
df = pd.read_csv(test_data, header=None)
df.columns = ['c1', 'c2', 'c3', 'c4']
df

    c1   c2   c3   c4
0  5.1  NaN  1.4  0.2
1  4.9  3.0  1.4  0.2
2  4.7  3.2  NaN  0.2
3  7.0  3.2  4.7  1.4
4  6.4  3.2  4.5  1.5
5  6.9  3.1  4.9  NaN
6  NaN  NaN  NaN  NaN

VII. Handling missing data

df.isnull()

      c1     c2     c3     c4
0  False   True  False  False
1  False  False  False  False
2  False  False   True  False
3  False  False  False  False
4  False  False  False  False
5  False  False  False   True
6   True   True   True   True

# Chaining sum() after isnull() counts the missing values in each column
print(df.isnull().sum())
# c1    1
# c2    2
# c3    2
# c4    2
# dtype: int64

# axis=0: drop every row that contains a NaN
df.dropna(axis=0)

    c1   c2   c3   c4
1  4.9  3.0  1.4  0.2
3  7.0  3.2  4.7  1.4
4  6.4  3.2  4.5  1.5

# axis=1: drop every column that contains a NaN
df.dropna(axis=1)
# Empty DataFrame
# Columns: []
# Index: [0, 1, 2, 3, 4, 5, 6]
# Every column here contains at least one NaN, so all of them are dropped and
# only the row index remains.

# Drop only the rows (or columns) whose values are all NaN
df.dropna(how='all')

    c1   c2   c3   c4
0  5.1  NaN  1.4  0.2
1  4.9  3.0  1.4  0.2
2  4.7  3.2  NaN  0.2
3  7.0  3.2  4.7  1.4
4  6.4  3.2  4.5  1.5
5  6.9  3.1  4.9  NaN

# Keep only the rows that have at least 4 non-NaN values
df.dropna(thresh=4)

    c1   c2   c3   c4
1  4.9  3.0  1.4  0.2
3  7.0  3.2  4.7  1.4
4  6.4  3.2  4.5  1.5

# Drop the rows that have a NaN in column c2
df.dropna(subset=['c2'])

    c1   c2   c3   c4
1  4.9  3.0  1.4  0.2
2  4.7  3.2  NaN  0.2
3  7.0  3.2  4.7  1.4
4  6.4  3.2  4.5  1.5
5  6.9  3.1  4.9  NaN

# Fill NaN values with a constant
df.fillna(value=10)

     c1    c2    c3    c4
0   5.1  10.0   1.4   0.2
1   4.9   3.0   1.4   0.2
2   4.7   3.2  10.0   0.2
3   7.0   3.2   4.7   1.4
4   6.4   3.2   4.5   1.5
5   6.9   3.1   4.9  10.0
6  10.0  10.0  10.0  10.0
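
fillna also accepts a dict of per-column fill values, so each column can be filled differently; an added sketch using the same df:

df.fillna({'c2': df['c2'].mean(), 'c4': 0})
# NaNs in c2 become the column mean, NaNs in c4 become 0,
# and NaNs in the other columns are left untouched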

VIII. Combining data

df1 = pd.DataFrame(np.zeros((3, 4)))
df1

     0    1    2    3
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0

df2 = pd.DataFrame(np.ones((3, 4)))
df2

     0    1    2    3
0  1.0  1.0  1.0  1.0
1  1.0  1.0  1.0  1.0
2  1.0  1.0  1.0  1.0

# axis=0: stack the frames vertically (row-wise)
pd.concat((df1, df2), axis=0)

     0    1    2    3
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
0  1.0  1.0  1.0  1.0
1  1.0  1.0  1.0  1.0
2  1.0  1.0  1.0  1.0

# axis=1: place the frames side by side (column-wise)
pd.concat((df1, df2), axis=1)

     0    1    2    3    0    1    2    3
0  0.0  0.0  0.0  0.0  1.0  1.0  1.0  1.0
1  0.0  0.0  0.0  0.0  1.0  1.0  1.0  1.0
2  0.0  0.0  0.0  0.0  1.0  1.0  1.0  1.0

# append stacks rows, like concat with axis=0
df1.append(df2)

     0    1    2    3
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
0  1.0  1.0  1.0  1.0
1  1.0  1.0  1.0  1.0
2  1.0  1.0  1.0  1.0
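
Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0; on current versions the same result comes from pd.concat:

pd.concat([df1, df2])                     # same result as the append call above
pd.concat([df1, df2], ignore_index=True)  # optionally renumber the index 0..5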

IX. Importing and exporting data

Use df = pd.read_excel(filename) to read a file and df.to_excel(filename) to save one.

1 Reading data from a file

Main parameters of the read functions:

Parameter     Description
sep           Field delimiter; regular expressions such as '\s+' are allowed
header=None   The file has no header row
names         Column names to use
index_col     Column to use as the row index
skiprows      Rows to skip
na_values     Strings to treat as missing values
parse_dates   Whether/which columns to parse as dates; boolean or list

df = pd.read_excel(filename)
df = pd.read_csv(filename)
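
Putting several of those parameters together in one call (a sketch only; data.csv and its column names are hypothetical):

df = pd.read_csv(
    'data.csv',
    sep=',',                            # field delimiter
    header=None,                        # the file has no header row
    names=['date', 'c1', 'c2', 'c3'],   # assign column names
    index_col='date',                   # use the date column as the index
    skiprows=[0],                       # skip the first physical line
    na_values=['NA', '-'],              # treat these strings as NaN
    parse_dates=['date'],               # parse the date column as datetimes
)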

2 Writing data to a file

Main parameters of the write functions:

Parameter      Description
sep            Field delimiter
na_rep         String to write for missing values; defaults to the empty string
header=False   Do not write the column names
index=False    Do not write the row index
columns        Subset of columns to write, passed as a list

df.to_excel(filename)
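
A to_csv call that uses several of these parameters (a sketch; out.csv is a hypothetical path and the column names come from the frames above):

df.to_csv(
    'out.csv',
    sep=',',               # field delimiter
    na_rep='NULL',         # write missing values as NULL
    index=False,           # do not write the row index
    columns=['c1', 'c2'],  # write only these columns
)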

X. Reading JSON with pandas

strtext = '[{"ttery":"min","issue":"20130801-3391","code":"8,4,5,2,9","code1":"297734529","code2":null,"time":1013395466000},\
{"ttery":"min","issue":"20130801-3390","code":"7,8,2,1,2","code1":"298058212","code2":null,"time":1013395406000},\
{"ttery":"min","issue":"20130801-3389","code":"5,9,1,2,9","code1":"298329129","code2":null,"time":1013395346000},\
{"ttery":"min","issue":"20130801-3388","code":"3,8,7,3,3","code1":"298588733","code2":null,"time":1013395286000},\
{"ttery":"min","issue":"20130801-3387","code":"0,8,5,2,7","code1":"298818527","code2":null,"time":1013395226000}]'

df = pd.read_json(strtext, orient='records')
df

        code      code1  code2          issue           time ttery
0  8,4,5,2,9  297734529    NaN  20130801-3391  1013395466000   min
1  7,8,2,1,2  298058212    NaN  20130801-3390  1013395406000   min
2  5,9,1,2,9  298329129    NaN  20130801-3389  1013395346000   min
3  3,8,7,3,3  298588733    NaN  20130801-3388  1013395286000   min
4  0,8,5,2,7  298818527    NaN  20130801-3387  1013395226000   min

df.to_excel('pandas处理json.xlsx',
            index=False,
            columns=["ttery", "issue", "code", "code1", "code2", "time"])

1 The five forms of the orient parameter

orient tells read_json what format to expect the JSON string to be in. It can take the following five values:

'split': dict like {index -> [index], columns -> [columns], data -> [values]}

This format consists of an index, column labels, and a data matrix. The only allowed keys are index, columns and data.

s = '{"index":[1,2,3],"columns":["a","b"],"data":[[1,3],[2,8],[3,9]]}'
df = pd.read_json(s, orient='split')
df

   a  b
1  1  3
2  2  8
3  3  9

'records': list like [{column -> value}, ... , {column -> value}]

This is a list whose members are dicts, as in the JSON data handled above. Each dict maps column names (keys) to values, and each dict becomes one row of the DataFrame.

strtext = '[{"ttery":"min","issue":"20130801-3391","code":"8,4,5,2,9","code1":"297734529","code2":null,"time":1013395466000},\
{"ttery":"min","issue":"20130801-3390","code":"7,8,2,1,2","code1":"298058212","code2":null,"time":1013395406000}]'
df = pd.read_json(strtext, orient='records')
df

        code      code1  code2          issue           time ttery
0  8,4,5,2,9  297734529    NaN  20130801-3391  1013395466000   min
1  7,8,2,1,2  298058212    NaN  20130801-3390  1013395406000   min

'index': dict like {index -> {column -> value}}

The keys are index labels, and each value is a dict mapping column names to values. For example:

s = '{"0":{"a":1,"b":2},"1":{"a":9,"b":11}}'
df = pd.read_json(s, orient='index')
df

   a   b
0  1   2
1  9  11

'columns': dict like {column -> {index -> value}}

Here the keys are column names, and each value is itself a dict keyed by index label holding the cell values. For example:

s = '{"a":{"0":1,"1":9},"b":{"0":2,"1":11}}'
df = pd.read_json(s, orient='columns')
df

   a   b
0  1   2
1  9  11

'values': just the values array.

This is the most familiar form: a nested, two-level list whose inner lists are the rows.

s = '[["a",1],["b",2]]'
df = pd.read_json(s, orient='values')
df

   0  1
0  a  1
1  b  2

XI. Reading SQL queries with pandas

import numpy as np
import pandas as pd
import pymysql


def conn(sql):
    # Connect to the MySQL database
    conn = pymysql.connect(
        host="localhost",
        port=3306,
        user="root",
        passwd="123",
        db="db1",
    )
    try:
        data = pd.read_sql(sql, con=conn)
        return data
    except Exception as e:
        print("SQL is not correct!")
    finally:
        conn.close()


sql = "select * from test1 limit 0, 10"  # the SQL statement
data = conn(sql)
print(data.columns.tolist())  # inspect the column names
print(data)                   # inspect the data
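
Recent pandas versions warn when read_sql is handed a raw DBAPI connection such as the pymysql connection above (only SQLAlchemy connectables and sqlite3 connections are officially supported), so a common alternative is to wrap the same hypothetical credentials in a SQLAlchemy engine:

from sqlalchemy import create_engine
import pandas as pd

# Same hypothetical host/user/password/database as above
engine = create_engine("mysql+pymysql://root:123@localhost:3306/db1")
data = pd.read_sql("select * from test1 limit 0, 10", con=engine)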
