Solving Constrained Optimization Problems with the Generalized Reduced Gradient Method

The generalized reduced gradient (GRG) method extends the reduced gradient method to constrained optimization problems with nonlinear constraints. Its core idea is to use the equality constraints to reduce the number of optimization variables.

Theory

First consider the optimization problem:
$$
\begin{aligned}
\min \quad & f(X) \\
\mathrm{s.t.} \quad & h_j(X)\leq 0, \quad j=1,2,\dots,m \\
& l_k(X)=0, \quad k=1,2,\dots,l \\
& x_i^l\leq x_i\leq x_i^u, \quad i=1,2,\dots,n
\end{aligned}
$$

Now introduce slack variables for the inequality constraints. The problem can then be written as:
$$
\begin{aligned}
\min \quad & f(X) \\
\mathrm{s.t.} \quad & h_j(X)+x_{n+j}=0, \quad j=1,2,\dots,m \\
& l_k(X)=0, \quad k=1,2,\dots,l \\
& x_i^l\leq x_i\leq x_i^u, \quad i=1,2,\dots,n \\
& x_{n+j}\geq 0, \quad j=1,2,\dots,m
\end{aligned}
$$
The model can then be simplified into the unified form:
$$
\begin{aligned}
\min \quad & f(X) \\
\mathrm{s.t.} \quad & g_j(X)=0, \quad j=1,2,\dots,m+l \\
& x_i^l\leq x_i\leq x_i^u, \quad i=1,2,\dots,n+m
\end{aligned}
$$
The generalized reduced gradient is based on variable elimination. Partition the variables into two parts:
$$ X= \begin{bmatrix} Y \\ Z \end{bmatrix} $$
where Y collects the independent (design) variables and Z the dependent (state) variables. Since X satisfies the constraints $g_j(X)=0$, the differential of the constraints is
$$\mathrm{d}g=[C]\,\mathrm{d}Y+[D]\,\mathrm{d}Z $$
where
$$
[C]= \begin{bmatrix} \frac{\partial g_1}{\partial y_1}& \cdots&\frac{\partial g_1}{\partial y_{n-l}} \\ \vdots&\ddots &\vdots \\ \frac{\partial g_{m+l}}{\partial y_1}&\cdots&\frac{\partial g_{m+l}}{\partial y_{n-l}} \end{bmatrix}, \qquad
[D]= \begin{bmatrix} \frac{\partial g_1}{\partial z_1}& \cdots&\frac{\partial g_1}{\partial z_{m+l}} \\ \vdots&\ddots &\vdots \\ \frac{\partial g_{m+l}}{\partial z_1}&\cdots&\frac{\partial g_{m+l}}{\partial z_{m+l}} \end{bmatrix}
$$
As X varies, the constraints g remain identically zero, so $\mathrm{d}g=0$, and therefore
$$
\begin{aligned}
\mathrm{d}Z &= -[D]^{-1}[C]\,\mathrm{d}Y \\
\mathrm{d}f(X) &= \left(\nabla^T_Y f-\nabla^T_Z f\,[D]^{-1}[C]\right)\mathrm{d}Y
\end{aligned}
$$
which can also be expressed as
$$
\frac{\mathrm{d}f(X)}{\mathrm{d}Y}=G_R, \qquad
G_R=\nabla_Y f-\left([D]^{-1}[C]\right)^T\nabla_Z f
$$
where $G_R$ is the generalized reduced gradient.
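To make the formula concrete, here is a minimal numerical sketch; the toy problem, point, and partition are chosen purely for illustration. For $\min\ x_1^2+x_2^2$ subject to $x_1+x_2-1=0$, with $Y=[x_1]$ and $Z=[x_2]$, it evaluates $G_R$ at a feasible point:

```python
import numpy as np

# Toy problem (illustrative): min x1^2 + x2^2  s.t.  g(x) = x1 + x2 - 1 = 0
# Partition: Y = [x1] (independent), Z = [x2] (dependent)
x1, x2 = 0.3, 0.7                      # a feasible point: g = 0

grad_Y = np.array([2 * x1])            # df/dY
grad_Z = np.array([2 * x2])            # df/dZ
C = np.array([[1.0]])                  # dg/dY
D = np.array([[1.0]])                  # dg/dZ

G_R = grad_Y - (np.linalg.inv(D) @ C).T @ grad_Z   # reduced gradient formula
print(G_R)                             # [-0.8]
```

At the constrained minimum $(0.5, 0.5)$ the same computation gives $G_R = 0$, which is exactly the stopping test used below.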
The implementation can be stated as follows.
1. Initialize X, which consists of the two parts Y and Z. Guidelines for partitioning into independent and dependent variables: the partition should be chosen so that the matrix D stays nonsingular; variables sitting at their lower or upper bounds are best taken as independent variables; and since the slack variables enter the problem linearly, they should be designated as dependent variables.
2. Compute the generalized reduced gradient.
3. Check convergence. If the norm of the generalized reduced gradient is small enough, the current point can be taken as the optimal solution.
4. Compute the search direction. The generalized reduced gradient plays the role of the objective gradient in an unconstrained problem, so steepest descent, conjugate gradient, and similar methods can be used to determine the direction.
5. Determine the step length along the search direction with a one-dimensional line search. Note that the step must not drive any variable past its bounds.
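The steps above can be sketched end to end on a toy problem (a minimal illustration with a fixed step length standing in for the full line search; the problem and all values are chosen for demonstration only):

```python
import numpy as np

# Toy problem: min x1^2 + x2^2  s.t.  g(x) = x1 + x2 - 1 = 0
# Y = [x1] independent, Z = [x2] dependent, so C = dg/dY = 1 and D = dg/dZ = 1
g = lambda x: x[0] + x[1] - 1.0

x = np.array([0.9, 0.1])               # step 1: feasible starting point
alpha = 0.25                           # fixed step length for this sketch
for _ in range(100):
    G_R = 2 * x[0] - 2 * x[1]          # step 2: reduced gradient (D^{-1}C = 1 here)
    if abs(G_R) < 1e-8:                # step 3: convergence test on |G_R|
        break
    dY = -G_R                          # step 4: steepest-descent direction for Y
    dZ = -1.0 * dY                     # dZ = -D^{-1} C dY
    x = x + alpha * np.array([dY, dZ]) # step 5: move along the direction
    x[1] -= g(x)                       # Newton restoration of the dependent variable
print(x)                               # converges to [0.5, 0.5]
```

Because the objective is quadratic and the constraint linear, this converges to the exact minimizer $(0.5, 0.5)$ in very few iterations.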

Numerical examples

Example 1: a linearly constrained problem

$$
\begin{aligned}
\min \quad obj =\ & x_1^3+3x_2^3+2(x_1-x_2)^2+2e^{-(x_1+x_2)} \\
\mathrm{s.t.} \quad & x_1-x_2\geq -2 \\
& x_1+x_2\leq 25 \\
& x_1\geq 0 \\
& x_2\geq 0
\end{aligned}
$$
The iteration converges quickly; run the code below for the detailed numerical results.

Example 2: nonlinear constraints

$$
\begin{aligned}
\min \quad obj =\ & 4e^{x_1}-x_2^2+x_3^3-12 \\
\mathrm{s.t.} \quad & x_1^2+x_2=20 \\
& x_1+x_3=7 \\
& x_1\geq 0 \\
& x_2\geq 0 \\
& x_3\geq 0
\end{aligned}
$$
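Since the two equalities determine $x_2 = 20 - x_1^2$ and $x_3 = 7 - x_1$, Example 2 collapses to a one-variable problem, which gives an easy cross-check of the GRG result. A grid-search sketch (numpy only; the grid resolution is arbitrary):

```python
import numpy as np

# Eliminate x2 and x3 via the equalities; x2 >= 0 forces x1 <= sqrt(20),
# and x3 >= 0 is then automatic because sqrt(20) < 7.
x1 = np.linspace(0.0, np.sqrt(20.0), 200001)
obj = 4 * np.exp(x1) - (20 - x1 ** 2) ** 2 + (7 - x1) ** 3 - 12
k = int(np.argmin(obj))
x_opt = np.array([x1[k], 20 - x1[k] ** 2, 7 - x1[k]])
print(x_opt, obj[k])                   # minimizer of the reduced one-variable problem
```

This is, of course, the same elimination idea the GRG method automates for problems where the dependent variables cannot be solved for in closed form.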

Code

# Generalized Reduced Gradient Algorithm

import numpy as np
import matplotlib.pyplot as plt
from sympy import *

def generalized_reduced_gradient():
    ans_list = []
    d_list = []
    x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
    xvars = [x1, x2, x3, x4]
    x_l = [0, 0, 0, 0]                                  # lower bounds of all variables
    fx = x1**3 + 3*x2**3 + 2*(x1 - x2)**2 + 2*exp(-x1 - x2)  # objective (Example 1)
    hxs = [x1 - x2 + 2 - x3, x1 + x2 - 25 + x4]         # equality constraints after adding slacks x3, x4
    xcurr = np.array([2, 4, 5, 1])                      # starting solution

    # Example 2 (nonlinear constraints):
    # x1, x2, x3 = symbols('x1 x2 x3')
    # xvars = [x1, x2, x3]
    # x_l = [0, 0, 0]
    # fx = 4 * exp(x1) - x2 ** 2 + x3 ** 3 - 12         # function to be minimized
    # hxs = [20 - x1 ** 2 - x2, x1 + x3 - 7]            # constraints to be obeyed
    # xcurr = np.array([2, 4, 5])                       # starting solution

    # Another test case:
    # x1, x2, x3 = symbols('x1 x2 x3')
    # xvars = [x1, x2, x3]
    # x_l = [0, 0, 0]
    # fx = 4 * x1 - x2 ** 2 + x3 ** 2 - 12              # function to be minimized
    # hxs = [x1 + x3 - 7, x1 + x2 - 50, 20 - x1 * x3]   # constraints to be obeyed
    # xcurr = np.array([2, 4, 5])                       # starting solution

    alpha_0 = 1                                         # initial step length
    gamma = 0.4                                         # step-length reduction factor
    max_iter = 10                                       # Newton iterations for feasibility restoration
    max_outer_iter = 30
    eps_1, eps_2, eps_3 = 0.001, 0.001, 0.001

    dfx = np.array([diff(fx, xvar) for xvar in xvars])
    dhxs = np.array([[diff(hx, xvar) for xvar in xvars] for hx in hxs])
    nonbasic_vars = len(xvars) - len(hxs)               # number of independent variables
    opt_sols = []

    for outer_iter in range(max_outer_iter):
        print('\n\nOuter loop iteration: {0}, current solution: {1}'.format(outer_iter + 1, xcurr))
        ans_list.append([float(xcurr[i]) for i in range(len(xcurr))])
        opt_sols.append(fx.subs(zip(xvars, xcurr)))

        # Step 1: evaluate the generalized reduced gradient at xcurr
        delta_f = np.array([df.subs(zip(xvars, xcurr)) for df in dfx])
        delta_h = np.array([[dh.subs(zip(xvars, xcurr)) for dh in dhx] for dhx in dhxs])
        J = np.array([dhx[nonbasic_vars:] for dhx in delta_h])  # D: Jacobian w.r.t. dependent variables
        C = np.array([dhx[:nonbasic_vars] for dhx in delta_h])  # C: Jacobian w.r.t. independent variables
        delta_f_bar = delta_f[nonbasic_vars:]           # gradient w.r.t. dependent variables
        delta_f_cap = delta_f[:nonbasic_vars]           # gradient w.r.t. independent variables

        J_inv = np.linalg.inv(np.array(J, dtype=float))
        delta_f_tilde = delta_f_cap - delta_f_bar.dot(J_inv.dot(C))  # reduced gradient G_R

        # Step 2: convergence test on the norm of the reduced gradient
        if np.linalg.norm(np.array(delta_f_tilde, dtype=float)) <= eps_1:
            break

        d_bar = -delta_f_tilde.T                        # steepest descent in the reduced space
        d_cap = -J_inv.dot(C.dot(d_bar))                # dZ = -D^{-1} C dY
        d = np.concatenate((d_bar, d_cap)).T
        d_list.append([float(d[i]) for i in range(len(d))])

        # Step 3: line search along d
        alpha = alpha_0
        while alpha > 0.001:
            print('\nAlpha value: {0}\n'.format(alpha))

            # Step 3(a): trial point, clipped to the lower bounds
            print(xcurr.T, alpha, d)
            v = xcurr.T + alpha * d
            v = np.array([x_l[i] if v[i] < x_l[i] else v[i] for i in range(len(v))])
            v_bar = v[:nonbasic_vars]
            v_cap = v[nonbasic_vars:]
            flag = False

            for iters in range(max_iter):               # restore feasibility at the trial point
                print('Iteration: {0}, trial point v = {1}'.format(iters + 1, v))
                h = np.array([hx.subs(zip(xvars, v)) for hx in hxs])
                if all([abs(h_i) < eps_2 for h_i in h]):  # candidate satisfies all constraints
                    if fx.subs(zip(xvars, xcurr)) <= fx.subs(zip(xvars, v)):
                        alpha = alpha * gamma
                        break
                    else:
                        xcurr = v                       # candidate better than the current solution
                        flag = True
                        break

                # Step 3(b): Newton correction of the dependent variables
                delta_h_v = np.array([[dh.subs(zip(xvars, v)) for dh in dhx] for dhx in dhxs])
                J_inv_v = np.linalg.inv(np.array([dhx[nonbasic_vars:] for dhx in delta_h_v], dtype=float))
                v_next_cap = v_cap - J_inv_v.dot(h)
                v_next_cap = np.array([x_l[nonbasic_vars + i] if v_next_cap[i] < x_l[nonbasic_vars + i]
                                       else v_next_cap[i] for i in range(len(v_next_cap))])

                # Step 3(c): stop the correction once the dependent variables settle
                if abs(np.linalg.norm(np.array(v_cap - v_next_cap, dtype=float), 1)) > eps_3:
                    v_cap = v_next_cap
                    v = np.concatenate((v_bar, v_cap))
                else:
                    v_cap = v_next_cap
                    v = np.concatenate((v_bar, v_cap))
                    h = np.array([hx.subs(zip(xvars, v)) for hx in hxs])
                    if all([abs(h_i) < eps_2 for h_i in h]):

                        # Step 3(d): accept the point or shrink the step
                        if fx.subs(zip(xvars, xcurr)) <= fx.subs(zip(xvars, v)):
                            alpha = alpha * gamma       # search over smaller values of alpha
                            break
                        else:
                            xcurr = v
                            flag = True
                            break
                    else:
                        alpha = alpha * gamma
                        break

            if flag:
                break

    print('\n\nFinal solution obtained is: {0}'.format(xcurr))
    print('Value of the function at this point: {0}\n'.format(fx.subs(zip(xvars, xcurr))))

    plt.plot(opt_sols, 'ro')                            # objective value after every outer iteration
    plt.show()
    print(d_list)
    print(ans_list)

if __name__ == '__main__':
    generalized_reduced_gradient()

Reference: Singiresu S. Rao, Engineering Optimization: Theory and Practice. John Wiley & Sons, 2009.

NetEase Cloud Music hot-comment scraper

NetEase Cloud scraper

I noticed today that QQ Zone's long-press feature for generating NetEase Cloud Music comments is gone, which is a pity, so I decided to write my own scraper for NetEase Cloud Music comments.

Most existing scrapers no longer work, so I had to roll my own: I fixed some of the broken API calls and kept using the ones that still work.

import requests
import json
import random
import time
from lxml import etree
from pandas import DataFrame

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36'}

def scrape_index(url):
    # Collect the playlist links on one page of the "hot" Chinese playlists
    print(url)
    response = requests.get(url, headers=headers)
    html = etree.HTML(response.content)
    name_list = html.xpath('/html/body/div[3]/div/ul/li/div/a/@href')
    return name_list

def get_music():
    playlist_urls = []
    url = 'https://music.163.com/discover/playlist/?order=hot&cat=%E5%8D%8E%E8%AF%AD&limit=35&offset={page}'
    for page in range(1, 2):               # one page as a trial; use range(0, 1295, 35) for all pages
        url1 = url.format(page=page)
        '''
        Both the playlists and the comments are fetched through NetEase Cloud Music's GET APIs,
        which are fast and efficient:
        playlist API:    https://music.163.com/api/playlist/detail?id={playlist id}
        hot-comment API: http://music.163.com/api/v1/resource/comments/R_SO_4_{song id}?limit=20&offset=0
        Example playlist page: https://music.163.com/playlist?id=564322156
        '''
        for i in scrape_index(url1):
            detail_url = 'https://music.163.com' + i   # turn the relative href into a full URL
            playlist_urls.append(detail_url)
        # time.sleep(5 + random.random())              # polite crawling
    return playlist_urls

def get_comment(music_list):
    results = []
    for l in music_list[:10]:
        print(l)
        response = requests.get(l, headers=headers)
        html = etree.HTML(response.content)
        name_list = html.xpath('/html/body/div[3]/div[1]/div/div/div[2]/div[2]/ul/li/a/@href')
        ans = []
        for s in name_list:
            url = 'https://music.163.com/api/v1/resource/comments/R_SO_4_' + s.split("id=")[-1]
            res = json.loads(requests.get(url, headers=headers).text)
            for j in res['hotComments']:
                ans.append(j['content'].replace('\n', ' ').replace('\r', ' '))
        results.append(ans)
    return results

if __name__ == '__main__':
    ans = get_comment(get_music())
    a = DataFrame(ans)
    a.to_csv('test.csv', encoding="utf_8_sig")
    for i in range(len(ans)):
        for j in range(len(ans[i])):
            print(ans[i][j])

The comments are saved locally to test.csv. No delay is added, so scraping through a proxy is recommended.

Scraping EnergyPlus weather data with requests and organizing it into a two-level dictionary

Scraper module

The page is parsed mainly with the lxml module, and zipfile together with tempfile is used to unpack the downloaded zip archives.
Running the code creates a data folder in the current directory and downloads all the weather data into it.
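The write-to-temp-file-then-extract pattern can be exercised without touching the network by first building a zip archive in memory (a self-contained sketch; the file name and contents are made up):

```python
import io
import tempfile
import zipfile

# Build a small zip archive in memory to stand in for the downloaded bytes
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('station.epw', 'weather data')
data = buf.getvalue()

# Same pattern as the scraper: write the bytes to a temp file, rewind, extract
tmp = tempfile.TemporaryFile()
tmp.write(data)
tmp.seek(0)                            # rewind so ZipFile can read the archive
with zipfile.ZipFile(tmp, mode='r') as zf:
    names = zf.namelist()
print(names)                           # ['station.epw']
```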

Note that the site rate-limits visitors. Since I was behind a proxy, I added no sleep calls and never probed the site's anti-scraping threshold; without a proxy, use this cautiously or add a conservative stop-and-wait strategy.
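Such a stop-and-wait strategy could look like the following (purely illustrative: `polite_get`, its parameters, and the backoff schedule are my own invention, not part of the original script):

```python
import random
import time

def polite_get(fetch, url, retries=3, base_delay=5.0):
    """Call fetch(url) with a randomized delay and a linear backoff between attempts."""
    for attempt in range(retries):
        time.sleep(base_delay * attempt + random.random() * 0.1)  # wait before each retry
        try:
            resp = fetch(url)
            if resp.status_code == 200:
                return resp
        except Exception:
            pass                        # treat a network error like a failed attempt
    raise RuntimeError('giving up on ' + url)
```

Requests would then go through `polite_get(requests.get, url)` instead of calling `requests.get(url)` directly.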

import requests
import zipfile
import tempfile
from lxml import etree

def get_data(url):
    # e.g. https://energyplus.net/weather-download/asia_wmo_region_2/CHN//CHN_Anhui.Huoshan.583140_CSWD/all
    response = requests.get(url)
    return url, response.content

def unzip(filename, data):
    _tmp_file = tempfile.TemporaryFile()   # temporary file holding the downloaded zip
    _tmp_file.write(data)                  # write the raw bytes
    _tmp_file.seek(0)                      # rewind before reading the archive
    zf = zipfile.ZipFile(_tmp_file, mode='r')
    for names in zf.namelist():
        f = zf.extract(names, './data/' + filename)   # extract into the data directory
        print(f)
    zf.close()

if __name__ == '__main__':
    url_main = 'https://energyplus.net/weather-region/asia_wmo_region_2/CHN'
    response = requests.get(url_main)
    html = etree.HTML(response.content)
    name_city = html.xpath('/html/body/div[2]/div/section/div/section/div/a/@href')
    print(len(name_city))

    for i in range(315, len(name_city)):   # the original run used an offset of 315 here; adjust as needed
        s = name_city[i].split('/')
        url = "https://energyplus.net/weather-download/asia_wmo_region_2/CHN//" + s[-1] + "/all"
        if "CSWD" not in url:              # only the CSWD data sets are wanted
            continue
        print(i)
        print(url)
        url, data = get_data(url)
        unzip(s[-1], data)

After rotating through three batches of proxy IPs, all the data was downloaded. For other countries, just change the route in the url line. The result is shown in the figure.

Data organization module

A two-level dictionary is needed here, because the first key is the province and the second key is the city name. Adding a key-value pair to a two-level dictionary requires a membership check first; see the code for details.

import os

filePath = "./"
# os.walk yields one (dirpath, dirnames, filenames) triple per directory it visits,
# so breaking after the first triple keeps only the top-level subdirectory names in d
for _, d, _ in os.walk(filePath):
    break

def addtwodimdict(thedict, key_a, key_b, val):
    # Insert into a two-level dict, creating the inner dict when the first key is new
    if key_a in thedict:
        thedict[key_a].update({key_b: val})
    else:
        thedict.update({key_a: {key_b: val}})

main_dic = dict()
for name in d:
    # e.g. "CHN_Anhui.Huoshan.583140_CSWD" -> province "Anhui", city "Huoshan"
    province, city = name.split(".")[0][4:], name.split(".")[-2]
    addtwodimdict(main_dic, province, city, name)

# import json
# print(json.dumps(main_dic, indent=4))
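As a side note, `collections.defaultdict` removes the need for the explicit membership check in `addtwodimdict` (a sketch with made-up directory names):

```python
from collections import defaultdict

main_dic = defaultdict(dict)           # a missing province key creates an empty inner dict
for name in ["CHN_Anhui.Huoshan.583140_CSWD", "CHN_Beijing.Beijing.545110_CSWD"]:
    province, city = name.split(".")[0][4:], name.split(".")[-2]
    main_dic[province][city] = name    # no membership check needed
print(main_dic["Anhui"]["Huoshan"])    # CHN_Anhui.Huoshan.583140_CSWD
```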

The result is shown in the figure.

  • Copyrights © 2015-2023 galaxy