A novel hybrid method of forecasting crude oil prices using complex network science and artificial intelligence algorithms

Minggang Wang (a,b,c), Longfeng Zhao (c), Ruijin Du (c,d), Chao Wang (c,e), Lin Chen (c,f), Lixin Tian (a,d), H. Eugene Stanley (c)

a School of Mathematical Science, Nanjing Normal University, Nanjing 210042, Jiangsu, China
b Department of Mathematics, Nanjing Normal University Taizhou College, Taizhou 225300, Jiangsu, China
c Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215, USA
d Energy Development and Environmental Protection Strategy Research Center, Jiangsu University, Zhenjiang 212013, Jiangsu, China
e College of Economics and Management, Beijing University of Technology, Beijing 100124, China
f School of Management, Northwestern Polytechnical University, Xi'an 710072, Shanxi, China

HIGHLIGHTS
- A novel prediction paradigm (DFN-AI) is proposed based on complex network and AI algorithms.
- A DFN analysis technique is performed to extract the fluctuation features in the original data.
- A new data reconstruction method is designed using the extracted data.
- An artificial intelligence tool is employed to model the reconstructed data.
- Empirical results demonstrate the effectiveness and robustness of the DFN-AI method.

Keywords: Complex network; Artificial intelligence algorithms; Crude oil price prediction

ABSTRACT
Forecasting the price of crude oil is a challenging task. To improve this forecasting, this paper proposes a novel hybrid method that uses an integrated data fluctuation network (DFN) and several artificial intelligence (AI) algorithms, named the DFN-AI model.
In the proposed DFN-AI model, a complex network time series analysis technique is applied as a preprocessor to the original data to extract its fluctuation features and reconstruct it, and then an artificial intelligence tool, e.g., BPNN, RBFNN, or ELM, is employed to model the reconstructed data and predict future data. To verify these results we examine the daily, weekly, and monthly price data from the crude oil trading hub in Cushing, Oklahoma. Empirical results demonstrate that the proposed DFN-AI models (i.e., DFN-BP, DFN-RBF, and DFN-ELM) perform significantly better than their corresponding single AI models in both the direction and level of prediction. This confirms the effectiveness of our proposed modeling of the nonlinear patterns hidden in crude oil prices. In addition, our proposed DFN-AI methods are robust and reliable and are unaffected by random sample selection, sample frequency, or breaks in sample structure.

1. Introduction

Because crude oil is a basic energy source and its price volatilities strongly impact a country's economic development, social stability, and national security [1], accurately predicting crude oil price fluctuations is a consistently active topic of research. International research on crude oil price fluctuations is made more complex by the interplay among many factors, including market supply and demand [2], the US dollar exchange rate [3], speculative trading [4], geopolitical conflicts [5], and natural disasters [6], which introduce a high level of noise into the crude oil data. Thus crude oil prices, which exhibit such complex volatility characteristics as nonlinearity and uncertainty, are difficult to forecast, and any results obtained are uncertain. Therefore, crude oil price prediction remains a huge challenge.

Up to now, there has been a raft of literature discussing crude oil price forecasting.
https://doi.org/10.1016/j.apenergy.2018.03.148
Received 4 January 2018; Received in revised form 5 March 2018; Accepted 28 March 2018; Available online 30 March 2018.
Corresponding author at: School of Mathematical Science, Nanjing Normal University, Nanjing 210042, Jiangsu, China.
E-mail addresses: magic821204@sina.com (M. Wang), zlfccnu@mails.edu.cn (L. Zhao), dudo999@126.com (R. Du), chaowanghn@vip.163.com (C. Wang), clnwpu@126.com (L. Chen), tianlx@ujs.edu.cn (L. Tian), hes@bu.edu (H.E. Stanley).
Applied Energy 220 (2018) 480-495. 0306-2619/ © 2018 Elsevier Ltd. All rights reserved.

Among these prediction models, one of the most important is the econometric model. For instance, Lanza et al. [7] used cointegration and error correction models (ECM) to predict crude oil prices from January 2002 to June 2002. Murat et al. [8] proposed a vector error correction model (VECM) to forecast oil price movements and crack spread futures. Baumeister et al. [9] used vector autoregression (VAR) to forecast the WTI spot price. Xiang et al. [10] used an autoregressive integrated moving average (ARIMA) model to predict the Brent crude oil price. Sadorsky [11] used several GARCH models to forecast the daily volatility in petroleum futures price returns. Fan et al. [12] introduced GARCH-type models based on the generalized error distribution (GED) to examine the risk spillover effect between the West Texas Intermediate (WTI) and Brent crude oil markets. Kang et al. [13] then proposed a variety of conditional volatility models, including GARCH, IGARCH, CGARCH, and FIGARCH, to forecast the volatility of crude oil markets, and found that the CGARCH and FIGARCH models can forecast volatility persistence. Mohammadi et al.
[14] investigated the out-of-sample forecasting performance of four volatility models (GARCH, EGARCH, APARCH, and FIGARCH) over January 2009 to October 2009. Hou and Suardi [15] focused on two crude oil markets, Brent and WTI, and considered an alternative nonparametric approach to model and forecast oil price return volatility. The main results of the above-mentioned econometric models are listed in Table 1 (upper part). In essence there are two different types of econometric models. The first is a structural model of the price of oil, including ECM [7], VECM [8], VAR [9], etc., which depends on fundamental data such as demand and supply and is implemented through linear regression. This structural modeling approach includes explanatory variables beyond the past data of oil prices. The second is a time series approach, including ARIMA [10], GARCH-type models [11-15], etc., which looks only at the history of prices to determine future price movements. Because they are able to capture time-varying volatility, econometric models have improved forecasting accuracy, but because they assume the data to be stationary, regular, and linear, they cannot accurately model time series that are complex, irregular, and nonlinear [7-15].

In addition to the classic econometric approaches, artificial intelligence (AI) methods have been used to uncover the inner complexity of oil prices. For example, Moshiri et al. [16] set up a nonlinear and flexible artificial neural network (ANN) model to forecast daily crude oil futures prices traded at the New York Mercantile Exchange (NYMEX). Kaboudan [17] evaluated forecasts produced by two competing compumetric forecasting methods: genetic programming (GP) and artificial neural networks (ANN). Mostafa et al. [18] forecasted oil prices using gene expression programming (GEP) and artificial neural network (ANN) models. Kaboli et al.
[19,20] developed an artificial cooperative search algorithm (ACSA) and GEP to provide better-fit solutions and improve estimation accuracy. Xie et al. [21] proposed a support vector machine (SVM) to forecast crude oil prices and compared its performance with ARIMA and the back propagation neural network (BPNN). Shin et al. [22] employed semi-supervised learning (SSL) to forecast the upward and downward movement of oil prices. Yusof et al. [23] proposed a least squares support vector machine (LSSVM) method for oil futures price forecasting. Zhao et al. [24] introduced a deep learning approach (SDAE) for WTI crude oil spot price forecasting. The main results of the above-mentioned AI models are listed in Table 1 (middle part). Unlike econometric models [7-15], artificial intelligence methods are able to model such complex characteristics as nonlinearity and volatility. Artificial intelligence methods also have disadvantages. For example, ANN and BPNN often suffer from local minima and over-fitting, while AI models such as SVM, GP, and ANN are sensitive to parameter selection [16-24].

Because single prediction models, including both econometric models and AI methods, are limited, many studies now use hybrid methods to forecast crude oil prices. Some typical literature on hybrid methods for crude oil price forecasting can be found in Table 1 (bottom part).
Overall, hybrid methods combine interdisciplinary techniques so as to use their strengths, and they can be roughly classified into two categories. (1) Combinations among AI models, such as the empirical mode decomposition (EMD) based neural network ensemble learning paradigm [25]; the hybrid model combining the dynamic properties of a multilayer back propagation neural network with the recent Haar à trous wavelet decomposition, i.e., HTW-MBPNN [26]; the hybrid model built upon EMD and the feed-forward neural network (FNN) modeling framework incorporating the slope-based method (SBM), i.e., EMD-SBM-FNN [27]; a decomposition-and-ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and the extended extreme learning machine (EELM), i.e., EEMD-EELM [28]; the compressed sensing based learning paradigm integrating compressed sensing based de-noising (CSD) and a certain artificial intelligence (AI) tool, i.e., CSD-AI [29]; the alternative approach based on a genetic algorithm and neural network (GA-NN) [30]; and the hybrid AI system framework integrating web-based text mining and a rule-based expert system with ANN-based time series forecasting techniques [31]. (2) Combinations of AI methods and econometric methods, such as a hybrid method that combines EEMD, least square support vector machine particle swarm optimization (LSSVM-PSO), and the GARCH model, i.e., EEMD-LSSVM-PSO-GARCH [1].

Nomenclature

X  original time series
N  data size
P  fluctuation series of X
S  symbol series
k  number of symbols
s_i  symbol
L  length of the sliding window
l  sliding step
r  threshold
FM_i  the ith fluctuation mode
M  number of fluctuation modes
M′  number of different fluctuation modes
υ_i^t  node numbered i at time t
V_ij^{t+1}  set of all out-neighbor nodes of υ_i^t
W  weight
η  learning rate
E  error function whose gradient is taken
B_i  prototype of the input vectors
σ_i  width of RBF unit i
X̂  predicted data
f(x)  activation function
b_i  bias of hidden node i
β_i  weights of hidden neuron i to the output neurons
EX  extracted data
SX  sub-data of the original data
α  selectivity coefficient
RX  reconstructed data

Abbreviations

DFN  data fluctuation network
BPNN  back propagation neural network
RBFNN  radial basis function neural network
ELM  extreme learning machine
DFN-BP  hybrid model based on DFN and BPNN
DFN-RBF  hybrid model based on DFN and RBFNN
DFN-ELM  hybrid model based on DFN and ELM
MAPE  mean absolute percentage error
RMSE  root mean square error
Dstat  directional statistic
DMS  Diebold-Mariano statistic

Table 1. Summary of studies on crude oil price forecasting via various methods.

Types | Typical literature | Forecasting models | Forecasting period | Data type | Main results
Econometric models | Lanza et al. [7] | ECM | 2002/01-2002/06 | Daily | Cointegration marginally improves static forecasts
 | Murat et al. [8] | VECM | 2000/01-2008/02 | Weekly | VECM outperforms the random walk model (RWM)
 | Baumeister et al. [9] | VAR | 1991/01-2010/12 | Monthly | VAR tends to have lower MSPE at short horizons than AR and ARMA
 | Xiang [10] | ARIMA | 2012/11-2013/04 | Daily | ARIMA has a good prediction effect and can be used for short-term prediction
 | Sadorsky [11] | GARCH-type models | 1988/02-2003/01 | Daily | TGARCH fits well for heating oil and natural gas; GARCH fits well for crude oil and unleaded gasoline
 | Fan et al. [12] | GED-GARCH | 1987/05-2006/08 | Daily | GED-GARCH has superior out-of-sample forecast power compared with the popular HSAF method
 | Kang et al. [13] | CGARCH and FIGARCH | 1992/01-2006/12 | Daily | CGARCH and FIGARCH provide superior out-of-sample volatility forecasts compared with GARCH and IGARCH
 | Mohammadi et al. [14] | GARCH, EGARCH, APARCH, FIGARCH | 2009/01-2009/10 | Weekly | The APARCH model outperforms the others
 | Hou et al. [15] | Nonparametric GARCH | 1992/01-2010/07 | Daily | The nonparametric GARCH model forecasts better than the traditional GARCH model
AI models | Moshiri et al. [16] | ANN | 1983/04-2003/01 | Daily | ANN outperforms ARMA and GARCH models
 | Kaboudan [17] | GP | 1993/01-1998/12 | Monthly | GP has an advantage over random walk predictions, while the ANN forecast proved inferior
 | Mostafa et al. [18] | GEP | 1986/01-2012/06 | Daily | GEP outperforms the ANN and ARIMA models
 | Xie et al. [21] | SVM | 1970/01-2003/12 | Monthly | SVM outperforms ARIMA and BPNN
 | Shin et al. [22] | SSL | 1992/01-2008/06 | Monthly | SSL outperforms ANN and SVM
 | Yusof et al. [23] | LSSVM | 2006/01-2012/06 | Daily | LSSVM outperforms the SVM and RBF network models
 | Zhao et al. [24] | SDAE | 1986/01-2016/05 | Monthly | SDAE has superior power compared with traditional machine learning models
Hybrid models | Yu et al. [25] | EMD-based NN ensemble | 1986/01-2006/09 | Daily | Performs better than traditional AI models
 | Jammazi et al. [26] | HTW-MBPNN | 1988/01-2010/03 | Monthly | Performs better than the conventional BPNN
 | Xiong et al. [27] | EMD-SBM-FNN | 2000/01-2011/12 | Weekly | EMD-SBM-FNN with the MIMO strategy is a promising technique with high-quality forecasts and accredited computational loads for multi-step-ahead forecasting
 | Yu, Dai et al. [28] | EEMD-EELM | 1986/01-2013/10 | Daily | Significantly superior to single EELM
 | Yu, Zhao et al. [29] | CSD-AI | 2011/01-2013/07 | Daily | Outperforms single benchmarks in both level and directional predictions
 | Chiroma et al. [30] | GA-NN | 2008/05-2011/12 | Monthly | Improves prediction accuracy and simplifies the complexity of the NN model structure
 | Wang et al. [31] | Hybrid AI system framework | 2000/01-2002/12 | Monthly | Significantly effective and practically feasible
 | Zhang et al. [1] | EEMD-LSSVM-PSO-GARCH | 2013/01-2013/12; 2000/01-2008/07; 1990/01-2013/07 | Daily; weekly; monthly | Strong forecasting capability for crude oil prices
Empirical analysis results repeatedly demonstrate that hybrid forecasting methods are more accurate than single methods (see Table 1). This is the case because hybrid methods combine single models such that the merits of each offset the defects of the others, although the calculation process required by hybrid methods is complicated. In other words, hybrid forecasting models are increasingly advocated in the recent literature, which also gives some hints for our research in this paper.

As mentioned above, the most important challenge in modeling crude oil prices is the complexity of the interacting inner factors, which introduces a high level of noise that corrupts the original data and thus largely weakens the prediction capability of models. Noise reduction techniques already employed include entropy-based wavelet de-noising [32], hybrid slantlet de-noising based on the least squares support vector regression model [33], exponential smoothing based on neural networks [34], and the extended Kalman filter method [35]. However, all of these techniques share a fatal weakness: their fixed basis design makes them sensitive to parameter settings. In recent years complex network theory has been widely used to analyze nonlinear time series. Complex network theory uses algorithms to transform a nonlinear time series into a corresponding complex network and uses the topology of the network to draw out regular fluctuation patterns. The application of complex network theory has been widely effective in determining the essential characteristics of a time series. It has produced a number of new algorithms that can transform a time series into a complex network system, among them the visibility graph (VG) [36], the pseudo-periodic time series transform algorithm [37], phase space reconstruction [38], and the coarse graining of phase space [39].
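Among the transform algorithms just listed, the natural visibility graph [36] is the simplest to state: two observations are linked when the straight line between them passes above every intermediate observation. A minimal sketch is given below; it is an illustration of the cited VG algorithm, not of this paper's own DFN method, and integer time indices are assumed:

```python
def visibility_edges(x):
    # Natural visibility graph [36]: points (a, x[a]) and (b, x[b]) are linked
    # iff every point between them lies strictly below the connecting line:
    # x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a) for all a < c < b.
    n = len(x)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Adjacent points are always linked (the intermediate range is empty), while a high point blocks visibility across it, which is what lets the graph topology encode the series' fluctuation structure.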
A large number of researchers have recently applied complex network theory to the study of energy price fluctuations and have produced many valuable results [40-51]. In other words, the rapid development of complex network time series analysis provides a new perspective for eliminating the noise in the original data.

Therefore, given the high level of noise in crude oil price data, this paper focuses on the following questions: How can the noise in the original data be eliminated using complex network time series analysis techniques? How can regular fluctuation patterns be determined and the nonlinear patterns hidden in the original data be extracted efficiently? How can the robustness of the analysis and the forecasting performance for crude oil prices be enhanced? To date there have been few related studies. To address these questions, here we combine complex network analysis and AI predictive methods to formulate a novel hybrid prediction model for crude oil price fluctuations. Different from previous studies, the four main novel contributions of our work are as follows. (1) A complex network analysis of the original data is first performed to extract the fluctuation features using the topological structure of the network. (2) A new data reconstruction method is designed using the extracted fluctuation-feature data and the original data. (3) An AI tool, e.g., BPNN, RBFNN, or ELM, is employed to model the reconstructed data and generate the final prediction. (4) Empirical results demonstrate that the proposed data fluctuation network (DFN) AI models (i.e., DFN-BP, DFN-RBF, and DFN-ELM) perform significantly better than their corresponding single AI models in both the direction and level of prediction, and that our proposed DFN-AI methods are robust and reliable and are unaffected by random sample selection, sample frequency, or breaks in sample structure.

We organize the rest of this paper as follows.
Section 2 provides a detailed description of how the proposed model is formulated. Section 3 presents a sensitivity analysis of the parameters. Section 4 describes and discusses the crude oil forecasting results. At the end of the paper we present our conclusions and propose possible future lines of research.

2. Methodology

2.1. Complex network analysis of time series

Complex network theory has recently been applied to the analysis of time series and has yielded high-quality results [36-39]. There are two steps in this approach. The first uses algorithms to map the time series into a complex network. The second uses the topological structure of the network to uncover the essential characteristics of the time series.

2.1.1. Map the time series into a data fluctuation network (DFN)

Here we use coarse graining theory to map the time series into a directed and weighted network [39,43]. In the calculation process we denote the time series X = {X(t)}, with t = 1, 2, ..., N, and the fluctuation series P = {P(t)}, which we obtain using

P(t) = (X(t) − X(t−1)) / X(t−1),    (1)

where X(0) = X(1), so that P(1) = 0. We next define k symbols {s_1, s_2, ..., s_k}, which denote the fluctuation state of the time series at time t. To preserve the symmetry of the symbols, k satisfies two conditions: it is odd and k >= 3. We then set k−2 thresholds {r_{−(k−3)/2}, ..., r_{−1}, r_0, r_1, ..., r_{(k−3)/2}}, where r_0 = 0. Using these thresholds, we map the fluctuation series {P(t)} into a continuous symbol series S = {S(t)} with S(t) ∈ {s_1, s_2, ..., s_k}. For example, when k = 5 we have

S(t) = s_1 if P(t) > r_1;  s_2 if 0 < P(t) <= r_1;  s_3 if P(t) = 0;  s_4 if −r_1 <= P(t) < 0;  s_5 if P(t) < −r_1.    (2)

Note that we can either increase or decrease the number of symbols for different time series according to what the problem requires. The sliding window method [31] is then used to divide the continuous symbol sequence {S(t)} into modes. Here there are M = [(N − L)/l] + 1 fluctuation modes, where L is the sliding window length and l the sliding step.
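The preprocessing steps above (Eqs. (1)-(2) plus the sliding window) can be sketched as follows. This is a minimal illustration with helper names of our own choosing; k = 5 with thresholds {−r_1, 0, r_1} is assumed, as in the example in Eq. (2):

```python
import numpy as np

def fluctuation_series(x):
    # Eq. (1): P(t) = (X(t) - X(t-1)) / X(t-1), with X(0) := X(1) so P(1) = 0
    x = np.asarray(x, dtype=float)
    p = np.empty_like(x)
    p[0] = 0.0
    p[1:] = (x[1:] - x[:-1]) / x[:-1]
    return p

def symbolize(p, r1):
    # Eq. (2), k = 5: coarse-grain P(t) into symbols s1..s5 via thresholds {-r1, 0, r1}
    out = []
    for v in p:
        if v > r1:
            out.append("s1")
        elif 0 < v <= r1:
            out.append("s2")
        elif v == 0:
            out.append("s3")
        elif -r1 <= v < 0:
            out.append("s4")
        else:
            out.append("s5")
    return out

def fluctuation_modes(symbols, L=5, l=1):
    # Sliding window of length L with step l: M = (N - L)/l + 1 modes
    return ["".join(symbols[i:i + L]) for i in range(0, len(symbols) - L + 1, l)]
```

With r_1 set, for instance, to the mean absolute fluctuation (the choice used in Section 3), consecutive modes become the nodes of the DFN and their transitions its weighted edges.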
The different fluctuation modes are denoted FM_i, i = 1, 2, ..., M′, with M′ <= M. Each fluctuation mode is a network node, and transformations between modes are the edges between nodes. The fluctuation modes evolve into each other over time, and the weight of an edge is defined as the transformation frequency. The directed and weighted data fluctuation network constructed in this way is denoted DFN(r, k, L, l). In summary, if we use five symbols {s_1, s_2, s_3, s_4, s_5} to represent the fluctuation sequence and let L = 5 and l = 1, the mapping process of DFN(r, k, L, l) is as shown in Fig. 1.

2.1.2. Extract the fluctuation features according to the topological structure of the DFN

Using the topological structure of the DFN, the fluctuation characteristics of the time series can be characterized. For example, let k = 5, L = 5, l = 1, and denote the node at time t by υ_i^t, where t = 1, 2, ..., M, M = N − L + 1, and i = 1, 2, ..., M′. Since the DFN is a directed network, with the exception of the first and last nodes every node has an in-neighbor node and an out-neighbor node. Using the construction method of the DFN (see Section 2.1.1), we see two types of connection between a node υ_i^M and its out-neighbor nodes:

(i) Node υ_i^M has no out-neighbor node. If the mapped symbol series is {S(t)}_{t=1,...,10} = {s_3 s_4 s_4 s_1 s_2 s_1 s_5 s_5 s_1 s_2}, then the nodes of the DFN are υ_1^1 = s_3s_4s_4s_1s_2, υ_2^2 = s_4s_4s_1s_2s_1, υ_3^3 = s_4s_1s_2s_1s_5, υ_4^4 = s_1s_2s_1s_5s_5, υ_5^5 = s_2s_1s_5s_5s_1, and υ_6^6 = s_1s_5s_5s_1s_2. Fig. 2(a) shows the network structure. Here node υ_6^6 (shown in green) has no out-neighbor node.

(ii) Node υ_i^M has out-neighbor nodes. If the mapped symbol series is {S(t)}_{t=1,...,14} = {s_4 s_4 s_1 s_2 s_4 s_1 s_2 s_4 s_1 s_4 s_1 s_2 s_4 s_1}, then the nodes of the DFN are υ_1^1 = s_4s_4s_1s_2s_4, υ_2^2 = s_4s_1s_2s_4s_1, υ_3^3 = s_1s_2s_4s_1s_2, υ_4^4 = s_2s_4s_1s_2s_4, υ_2^5 = s_4s_1s_2s_4s_1, υ_5^6 = s_1s_2s_4s_1s_4, υ_6^7 = s_2s_4s_1s_4s_1, υ_7^8 = s_4s_1s_4s_1s_2, υ_8^9 = s_1s_4s_1s_2s_4, and υ_2^10 = s_4s_1s_2s_4s_1. Fig. 2(b) shows the network structure. Here node υ_2^10 (shown in green) has two out-neighbor nodes, and every node has an out-neighbor node.

Fig. 2(b) shows the network structure of a DFN when the sample size is sufficiently large, while Fig. 2(a) shows a network structure that requires an adjustment of the parameters. There are two ways of adjusting this structure so that it conforms to that in Fig. 2(b): (i) reduce the number of symbols in the coarse graining process, e.g., adopt three symbols (k = 3) when coarse graining the fluctuation series; or (ii) reduce the number of symbols used in constructing the fluctuation modes, e.g., adopt a three-symbol combination (L = 3). Using these two methods, a small sample dataset can be converted into a usable DFN. The parameter sensitivity analysis of the DFN is described in Section 3.

After building the DFN we select the target node based on the input data. From the above analysis, the target node must have an out-neighbor node; e.g., in Fig. 2(b) the green node is the target node and nodes υ_3^3 and υ_5^6 are its out-neighbor nodes. The set of all out-neighbor nodes of the target υ_i^t is

V_ij^{t+1} = {υ_j^{t+1}}, j ∈ [1, M′], t ∈ [1, M].    (3)

The parameters of the DFN are usable when the node that appears at time t+1 is an element of the set V_ij^{t+1}. Thus there are two ways of extracting the future fluctuation features of the target node: (i) using all the elements in the set V_ij^{t+1}, or (ii) using the element with the greatest strength in the set V_ij^{t+1}. Fig. 3 summarizes the complex network analysis of a crude oil price series.

Fig. 1. The mapping process of DFN(r, k, L, l).
Fig. 2. Two different types of DFN.
Fig. 3. The process of the complex network analysis of crude oil prices.

2.2. Artificial intelligence algorithms

A series of artificial intelligence algorithms for forecasting crude oil prices has recently been developed, and they have proven superior to traditional forecasting models [16-24]. Here we focus on three: (i) the back propagation neural network (BPNN), (ii) the radial basis function neural network (RBFNN), and (iii) the extreme learning machine (ELM) [52-55].

2.2.1. Back propagation neural network (BPNN)

The back propagation neural network (BPNN) model is one of the most widely used artificial intelligence algorithms for classification and prediction [52]. This technique is an advanced multiple regression analysis that deals with responses more complex and nonlinear than those of standard regression analysis. The basic formula of the BP algorithm is

W(n) = W(n−1) − ΔW(n),    (4)

where

ΔW(n) = η ∂E/∂W(n−1) + γΔW(n−1),    (5)

W is the weight, η is the learning rate, E is the error function whose gradient is taken, and γΔW(n−1) is the incremental (momentum) term. Because the BPNN uses the gradient method, its learning convergence is slow and convergence to a local minimum often occurs. In addition, the selection of the learning and inertial factors affects the convergence of the BPNN and is determined by experience. Thus the usefulness of the BPNN is limited.

2.2.2. Radial basis function neural network (RBFNN)

The radial basis function (RBF) neural network has been widely applied in the neural network community [53]. The RBFNN is a mapping R^r → R^s. When X ∈ R^r is the input vector and B_i ∈ R^r (1 <= i <= u) are the prototypes of the input vectors, the output of each RBF unit is

R_i(X) = R_i(‖X − B_i‖), i = 1, 2, ..., u,    (6)

where ‖·‖ is the Euclidean norm on the input space. Because it can be factored, the Gaussian function is the preferred radial basis function. Thus

R_i(X) = exp(−‖X − B_i‖² / σ_i²),    (7)

where σ_i is the width of RBF unit i.
The output Y_j(X) of unit j of an RBFNN is

Y_j(X) = Σ_{i=0}^{u} R_i(X) W(j, i),    (8)

where R_0 = 1, W(j, i) is the weight or strength of receptive field i to output j, and W(j, 0) is the bias of output j. Geometrically, an RBFNN partitions the input space into several hyper-sphere subspaces. This introduces several challenges into the development of the RBF algorithm, e.g., over-fitting, overtraining, the small-sample effect, and the singular problem.

2.2.3. Extreme learning machine (ELM)

The extreme learning machine (ELM) was originally applied to single hidden-layer feed-forward neural networks and was then extended to generalized feed-forward networks [54,55]. For a set of training samples {(X_j, C_j)}_{j=1}^{N} with N samples and C classes, the single hidden-layer feed-forward neural network with h hidden nodes and activation function f(x) is

Σ_{i=1}^{h} β_i f_i(X_j) = Σ_{i=1}^{h} β_i f(W_i · X_j + b_i) = Y_j, j = 1, 2, ..., N,    (9)

where X_j = [x_{j1}, x_{j2}, ..., x_{jn}]^T is the input, C_j = [c_{j1}, c_{j2}, ..., c_{jm}]^T is its corresponding output, W_i = [w_{i1}, w_{i2}, ..., w_{in}]^T are the connecting weights of hidden neuron i to the input neurons, b_i is the bias of hidden node i, β_i = [β_{i1}, β_{i2}, ..., β_{im}]^T are the connecting weights of hidden neuron i to the output neurons, and Y_j is the actual network output with respect to input X_j. Because the hidden parameters {W_i, b_i} can be randomly generated during the training period without tuning, ELM solves a compact model that minimizes the error between C_j and Y_j, i.e.,

min_β ‖Hβ − C‖_F,    (10)

with

H(W_1, ..., W_h, b_1, ..., b_h) = [f(W_1·X_1 + b_1) ⋯ f(W_h·X_1 + b_h); ⋮ ⋱ ⋮; f(W_1·X_N + b_1) ⋯ f(W_h·X_N + b_h)], β = [β_1^T; ⋮; β_h^T], C = [c_1^T; ⋮; c_N^T].    (11)

Here H is the hidden-layer output matrix and β the output weight matrix. Eq. (10) is a least squares problem with solution β̂ = H†C, where H† is the pseudo-inverse of H. The merit of ELM is that only the output weights need to be solved once the hidden node parameters (input weights and biases) are randomly selected. Its weakness is that it cannot effectively handle noisy time series.

2.3. The novel hybrid method for crude oil price forecasting

Using these techniques, a novel hybrid DFN-AI learning paradigm is formulated for crude oil prices (see Fig. 4). There are three steps in the proposed DFN-AI learning paradigm: extract the fluctuation features, reconstruct the data, and formulate the forecast.

STEP 1: Construct the data fluctuation network (DFN) and extract the fluctuation features.

The original data is first mapped to a directed and weighted data fluctuation network (DFN) using the complex network analysis of the time series shown in Figs. 1 and 4. We then use the topological structure of the DFN to extract the fluctuation features of the crude oil prices. For example, if the original data is X = {X(t)}, with t = 1, 2, ..., N, the length of the sliding window is L, and the sliding step is l, then the original data X can be rewritten as

X = [X(1) X(2) ⋯ X(M); ⋮ ⋱ ⋮; X(L) X(L+1) ⋯ X(L+M−1)] = [X_1 X_2 ⋯ X_M],    (12)

where X_i = [X(i), ..., X(i+L−1)]^T, i = 1, 2, ..., M, and M = (N−L)/l + 1. Then, using the method of Section 2.1.2, the extracted fluctuation-feature data are

EX = [EX(1) EX(2) ⋯ EX(D); ⋮ ⋱ ⋮; EX(L) EX(L+1) ⋯ EX(L+D−1)] = [EX_1 EX_2 ⋯ EX_D],    (13)

where D < M and each extracted column corresponds to a node index i_j ∈ [1, M′], j = 1, 2, ..., D.

STEP 2: Reconstruct the data.

We introduce the sub-data of the original data, SX = {X(t)}, with t = N − [αN] + 1, ..., N and α ∈ (0, 1], where [x] is the integer-valued function and α is the selectivity coefficient; when α = 1, SX = X. Using the sub-data SX and the extracted fluctuation-feature data EX, the new data RX is obtained for further analysis, i.e.,

RX = [EX(1) ⋯ EX(D) SX(1) ⋯ SX([αN]−L+1); ⋮; EX(L) ⋯ EX(L+D−1) SX(L) ⋯ SX([αN])]    (14)

or, in terms of the original data,

RX = [EX(1) ⋯ EX(D) X(N−[αN]+1) ⋯ X(M); ⋮; EX(L) ⋯ EX(L+D−1) X(N−[αN]+L) ⋯ X(L+M−1)].    (15)

STEP 3: Forecast using artificial intelligence algorithms.

After data reconstruction, we use the AI techniques BPNN, RBFNN, or ELM to model the reconstructed data RX; a novel hybrid DFN-AI learning paradigm for crude oil prices can then be formulated, as illustrated in Fig. 5. Combining the data fluctuation network analysis technology with BPNN, RBFNN, or ELM yields the hybrid prediction models DFN-BP, DFN-RBF, and DFN-ELM, respectively.

2.4. Performance evaluation criteria

To measure the forecasting accuracy of the proposed methods, we apply the widely used mean absolute percentage error (MAPE) and root mean square error (RMSE) [1,17,23,46], defined as

MAPE = (1/N) Σ_{t=1}^{N} |X̂(t) − X(t)| / X(t)    (16)

and

RMSE = sqrt( (1/N) Σ_{t=1}^{N} (X(t) − X̂(t))² ),    (17)

where X̂(t) and X(t) are the predicted and real values at time t, respectively, and N is the size of the dataset being tested. The MAPE criterion measures the mean absolute relative error of the prediction models, and the RMSE criterion measures their standard deviation. Under these error criteria, the smaller the MAPE and RMSE values, the greater the model accuracy.
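The two level-accuracy criteria can be sketched directly from their definitions (Eqs. (16)-(17)); the function names here are our own:

```python
import numpy as np

def mape(actual, predicted):
    # Eq. (16): mean absolute percentage error
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - actual) / actual))

def rmse(actual, predicted):
    # Eq. (17): root mean square error
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))
```

For example, predictions [110, 90] against actual values [100, 100] give MAPE = 0.1, i.e., a 10% mean relative error.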
Our most important concern, however, is the directional tendency of the data fluctuations (whether they are upward, stable, or downward), which we measure using

Dstat = (1/N) Σ_{t=1}^{N} a(t), where a(t) = 1 if (X(t+1) − X(t))(X̂(t+1) − X(t)) >= 0, and a(t) = 0 otherwise.    (18)

The closer the Dstat value is to 1, the higher the accuracy of the directional prediction of the model; the closer the Dstat value is to 0, the lower the accuracy of its directional predictions.

The Diebold-Mariano (DM) statistic [24,56] is used to measure the differences in the predictive accuracies of the forecasting models. Here the loss function is set to the mean square prediction error (MSPE). The null hypothesis is that the MSPE of the tested model is not lower than that of the benchmark model. The DM statistic is defined as

DM = D̄ / sqrt(V_D / M),    (19)

where D̄ = (1/M) Σ_{t=1}^{M} d(t), d(t) = (X(t) − X̂_test(t))² − (X(t) − X̂_bench(t))², V_D = γ_0 + 2 Σ_{q>=1} γ_q, and γ_q = cov(d_t, d_{t−q}). Here X̂_test(t) and X̂_bench(t) are the values of X(t) predicted by the tested model and its benchmark model, respectively, at time t.

Fig. 4. The procedures of the DFN-AI algorithm for crude oil price forecasting.
Fig. 5. The overall process of the DFN-AI learning paradigm.

3. Sensitivity analysis of the parameters in the data fluctuation network

The DFN(r, k, L, l) is constructed by mapping the time series to a directed and weighted network. The associated network then inherits some of the time series structure. We examine, without loss of generality, how the associated directed and weighted network inherits information from the time series. We test four time series of 1000 data points each: a chaotic time series generated from a logistic map (μ = 4), a chaotic time series generated from a Lorenz system (a = 10, b = 28, c = 8/3), an independent and identically distributed (i.i.d.) random series from a uniform distribution f(x) = U[0, 1], and the crude oil price series from 4 April 1983 to 30 March 1987.
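Two of these four test series are easy to reproduce; a sketch follows (the initial value x0 and the random seed are illustrative choices, and the Lorenz series would additionally require an ODE integrator):

```python
import numpy as np

def logistic_map(mu=4.0, x0=0.3, n=1000):
    # Chaotic logistic map x_{t+1} = mu * x_t * (1 - x_t), here with mu = 4
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = mu * x[t] * (1.0 - x[t])
    return x

def iid_uniform(n=1000, seed=0):
    # i.i.d. random series drawn from U[0, 1]
    return np.random.default_rng(seed).uniform(size=n)
```

Each generated series can then be passed through the Section 2.1 pipeline to produce its DFN.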
Three of these time series are generated by dynamical systems and the fourth is a real price series; thus their inner characteristics differ. The parameters are set at k = 5, L = 5, l = 1, and r_1 is calculated as

$$r_{1}=\frac{1}{N}\sum_{t=1}^{N}|P(t)|.\tag{20}$$

Fig. 6 shows the network structure of the DFNs mapped from the four time series types. Note that their structures (e.g., the number of nodes and the node strength distribution) differ completely. Specifically, the numbers of nodes (M) are 100, 40, 440, and 554, respectively, all of which are fewer than the number of fluctuation modes (i.e., M = (1000 - 5) + 1 = 996). This is the case because many fluctuation modes are repeated, which indicates that no new and different fluctuation modes appear in many of the time windows. Fig. 7(a) shows the relationship between the number of network nodes (M) and the time series data size (N). Note that as the sample data size N increases, the number of nodes M also slowly increases, and when the sample data size N reaches a certain value the number of nodes M becomes stable. This suggests that we can use previous fluctuation modes to characterize the fluctuation modes appearing in the short-term future. In addition, because we need to understand the transformation relationships among the nodes to understand the evolution of the time series, we examine how the parameters r, k, and L of the DFN affect the number of nodes. Fig. 7(b) shows the evolving relationship between the number of nodes and the threshold r. Note that there is a complex relationship between the number of network nodes M and the threshold r. In a practical application, we determine the threshold r by first setting an initial threshold r_n (e.g., using Eq. (20) to calculate r_1), selecting r thresholds in the vicinity of r_n, and calculating the corresponding loss function for the different r thresholds. We then construct the optimization model

$$\min G(r)=\sum_{i=1}^{n}w_{i}g_{i}(r)\quad \mathrm{s.t.}\;\; g_{i}(r)\leqslant g_{i}(r_{n}),\tag{21}$$

where g_i is the loss function and w_i is the weight.
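The node-count behavior discussed above can be reproduced in miniature. The following is a sketch of one plausible reading of the DFN construction (coarse-grain the fluctuation series into k symbols, slide a window of length L with step l, and treat each distinct symbol pattern as a node); the quantile-based symbolization and all names are our assumptions, not the paper's exact algorithm.

```python
import numpy as np
from collections import Counter

def dfn_modes(series, k=5, L=5, l=1):
    """Sketch of DFN coarse-graining: map the fluctuation series P(t) onto k
    symbols (equal-probability quantile bins here), then slide a window of
    length L with step l; each window's symbol pattern is one fluctuation
    mode, i.e., one candidate network node."""
    x = np.asarray(series, dtype=float)
    p = np.diff(x)                                    # fluctuation series P(t)
    cuts = np.quantile(p, np.linspace(0, 1, k + 1)[1:-1])
    sym = np.digitize(p, cuts)                        # symbols 0..k-1
    modes = [tuple(sym[i:i + L]) for i in range(0, len(sym) - L + 1, l)]
    # directed, weighted edges: transitions between consecutive modes
    edges = Counter(zip(modes[:-1], modes[1:]))
    return modes, edges

rng = np.random.default_rng(0)
modes, edges = dfn_modes(rng.uniform(size=1000), k=5, L=5, l=1)
n_nodes = len(set(modes))   # distinct fluctuation modes = network nodes
# n_nodes is well below len(modes): many fluctuation modes repeat,
# which is exactly why the node count saturates as N grows
```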
Using Eq. (21), the optimal threshold r is obtained that minimizes the loss function. Fig. 7(c) and (d) show the evolutionary relationship between the number of nodes M and the parameters k and L. We find that the number of network nodes M, the number of symbols k, and the length of the sliding window L are positively correlated, i.e., as the parameters k and L increase, the number of network nodes increases. Thus, the parameters k and L can be directly determined by using the characteristics of the time series.

Fig. 6. Data fluctuation network (DFN) associated to different time series.

4. Crude oil price forecasting result analysis

Here all the models are run 10 times using the MATLAB R2017b software package. All programs are run on a Lenovo laptop computer with an i5-4200U 1.60 GHz CPU and 4 GB of RAM. The sample data are obtained from the U.S. Energy Information Administration (EIA) (http://www.eia.gov/). To analyze robustness, three datasets are selected, i.e., daily, weekly, and monthly. For comparison and analysis, in each dataset we randomly select 200 sample data for training and testing. In line with previous relevant research [57], we set the size ratio between the training and testing sets at 9:1. Given the number of sample data, we set the parameters k = 3, r = 0, L = 5, and l = 1 to build the DFN, and use all the elements in the set $V_{ij}^{t+1}$ to determine the future fluctuations of the target node. The BPNN, RBFNN, and ELM are standard two-layer AI network models that have a hidden layer and an output layer. Note that too few hidden neurons cause inaccuracies in the correlation between inputs and outputs, and that too many produce local optima. Hastie et al. [57] find that the typical number of nodes is in the 5 to 100 range, and thus that using cross validation is unnecessary. Thus, the parameters are set as follows.
For the BPNN we set the number of nodes in the hidden layer to the default value of the 'newff' command in MATLAB, and we set the other training parameters to net.trainParam.epochs = 1000, net.trainParam.goal = 1e-6, and net.trainParam.lr = 0.01. For the RBFNN we set the number of nodes in the hidden layer to 90 and the radial basis function to the Gaussian function. For the ELM we set the number of nodes in the hidden layer to 8.

4.1. WTI crude oil price forecasting

4.1.1. Daily crude oil price forecasting

The daily prices from the Cushing, Oklahoma Crude Oil Future Contract 1 (Dollars per Barrel) from 4 April 1983 to 31 October 2017 are used as sample data. From these data, to verify how well the DFN-AI models generalize, we randomly select 200 sample data and use the first 180 sample data (90%) as training samples and the remaining 20 sample data (10%) as testing samples. The original training samples are selected in 10 different periods. Fig. 8(a) and (b) show the original training samples and the corresponding DFNs. Fig. 8(c) and (d) show the testing sample values of the crude oil prices in the different periods and the forecasts of each model. Table 2 compares the forecasting performances of the six methods, and the last line in each period shows the average running time of each model in the period. The last row of Table 2 shows the mean value of each model over all periods. The values in boldface are the best MAPE, RMSE, and Dstat performances among the six models.

Table 2 compares the DFN-BP, DFN-RBF, and DFN-ELM hybrid models with their respective BPNN, RBFNN, and ELM single counterparts.
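Of the three benchmarks configured above, the ELM is the simplest to sketch: the input-to-hidden weights are drawn at random and left untrained, and only the output weights are fitted, by least squares. The following minimal sketch is our own illustrative code (the 8 hidden nodes follow the setting above; the synthetic lagged-price setup, standardization, and all names are assumptions), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

class TinyELM:
    """Minimal extreme learning machine: random fixed hidden layer, output
    weights solved in closed form via the Moore-Penrose pseudoinverse."""

    def __init__(self, n_hidden=8):            # 8 hidden nodes, as above
        self.n_hidden = n_hidden

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid layer

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))  # not trained
        self.b = rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y        # least squares
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# illustrative one-step-ahead setup: predict the next price from 5 lags
prices = 50.0 + np.cumsum(rng.normal(0.0, 1.0, 300))      # synthetic series
X = np.column_stack([prices[i:i - 5] for i in range(5)])  # lagged inputs
y = prices[5:]
# standardize with training-set statistics so the sigmoid does not saturate
mu, sd = X[:270].mean(axis=0), X[:270].std(axis=0)
model = TinyELM(n_hidden=8).fit((X[:270] - mu) / sd, y[:270])
pred = model.predict((X[270:] - mu) / sd)
```

Because fitting reduces to one pseudoinverse, the ELM trains orders of magnitude faster than iterative backpropagation, which is consistent with the short ELM running times reported in Tables 2-4.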
Note that the directional and level prediction accuracies of the hybrid models are better than those of the single AI benchmark models. The average MAPE values of BPNN (0.01566), RBFNN (0.01548), and ELM (0.01740) are much higher than the average MAPE values of DFN-BP (0.01359), DFN-RBF (0.01374), and DFN-ELM (0.01360), and the DFN-BP model has the lowest MAPE. Similarly, the average RMSE values of BPNN (0.94848), RBFNN (0.98419), and ELM (1.12115) are much higher than the average RMSE values of DFN-BP (0.85213), DFN-RBF (0.86065), and DFN-ELM (0.85444), and the DFN-BP model has the lowest RMSE.

The directional prediction accuracy of the average Dstat values of BPNN (0.47500), RBFNN (0.50500), and ELM (0.43500) is lower than the average Dstat values of DFN-BP (0.68000), DFN-RBF (0.69000), and DFN-ELM (0.71500). Note that the hybrid models achieve a higher Dstat than their corresponding single AI models, and that the DFN-ELM model has the highest Dstat. All of this indicates that the proposed DFN-AI technique is a promising tool for forecasting crude oil prices. This is because the traditional AI models (i.e., BPNN, RBFNN, and ELM) are unable to extract, organize, and discriminate the information in the original data [44]. Because there are many fluctuation features in the crude oil price series, the traditional AI models cannot learn a useful representation, and their out-of-sample forecasts are thus inferior. In contrast, the DFN-AI models use a complex network algorithm to extract the fluctuation features of the crude oil price series, and they thus improve the predictive performance by reconstructing the data using the extracted information. Thus, the daily observations tell us that the forecasted results of the DFN-AI models are more reasonable and more accurate than those of the corresponding single AI models.
In addition, the DFN-AI models are able to generalize when forecasting crude oil prices, i.e., their forecasting power is not affected by the training and testing sample selection.

4.1.2. Weekly crude oil price forecasting

Previous studies assume data frequency to be an important factor that affects the accuracy of crude oil price forecasting [1,31]. Thus we use weekly data to examine the forecasting accuracy of the DFN-AI models. Fig. 9(a) and (b) show the original weekly training samples and the corresponding DFNs. As with the daily sample data, we select the original weekly training samples from ten different periods. Fig. 9(c) and (d) show the crude oil price forecasts of each model of the testing samples in the different periods. Table 3 lists the forecasting performance results of the six methods.

Table 3 compares the DFN-AI models with their respective single counterparts. Note that the weekly observations clearly indicate that the DFN-AI models are more accurate in both their directional and level predictions than the single benchmark models. The average MAPE values of BPNN (0.02799), RBFNN (0.03072), and ELM (0.03015) are higher than the average MAPE values of DFN-BP (0.02610), DFN-RBF (0.02687), and DFN-ELM (0.02603), and the DFN-ELM model has the lowest MAPE. The average RMSE values of BPNN (1.76251), RBFNN (1.87347), and ELM (1.77927) are higher than the average RMSE values of DFN-BP (1.71515), DFN-RBF (1.73190), and DFN-ELM (1.70225), and the DFN-ELM model has the lowest RMSE. For the directional prediction accuracy, the average Dstat values of BPNN (0.52000), RBFNN (0.53000), and ELM (0.49500) are lower than the average Dstat values of DFN-BP (0.63000), DFN-RBF (0.64500), and DFN-ELM (0.64500), and the DFN-ELM model has the highest Dstat.

Fig. 8. The daily sample data and forecast results: (a, b) the original training samples and the DFNs mapped from these sample data. (c, d) Actual testing sample values and predicted series.
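The directional criterion of Eq. (18) and the DM comparison of Eq. (19) that these model rankings rely on can be sketched as follows. This is illustrative code: the DM variant here uses the plain sample variance of the loss differential rather than the autocovariance-corrected V_D of the paper.

```python
import numpy as np

def dstat(actual, predicted):
    """Directional accuracy, Eq. (18): fraction of steps whose predicted
    move has the same sign as the actual move."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.mean((a[1:] - a[:-1]) * (p[1:] - a[:-1]) >= 0))

def dm_stat(actual, pred_test, pred_bench):
    """Simplified Diebold-Mariano statistic in the spirit of Eq. (19), with
    MSPE loss; negative values favor the tested model."""
    a = np.asarray(actual, dtype=float)
    d = ((a - np.asarray(pred_test, dtype=float)) ** 2
         - (a - np.asarray(pred_bench, dtype=float)) ** 2)
    return float(d.mean() / np.sqrt(d.var(ddof=1) / d.size))
```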
This clearly indicates that our new DFN-AI crude oil price forecasting method is robust to data frequency.

4.1.3. Monthly crude oil price forecasting

To further test the robustness of our proposed DFN-AI forecasting method, we examine monthly observations. To take into account structural breaks in the data when examining the forecasting performance, we use crude oil prices from February 1994 to January 2009 and from December 1999 to November 2014 as our original training data and test them with monthly observations from February 2009 to September 2010 and from December 2014 to July 2016, respectively. We find that the crude oil prices in these periods have structural breaks [1,18,41]. Besides these two sample periods, three other sample periods are also selected, i.e., June 1987 to January 2004, February 1989 to September 2005, and December 1999 to July 2016. Fig. 10(a) shows the selected original training samples and the corresponding DFNs. Fig. 10(b) shows the testing sample crude oil price values in the different periods and the forecasts provided by each model. Table 4 lists the forecasting performance results of the six methods.

Table 4 compares the DFN-AI model results with their respective single counterparts. Note that, based on the monthly observations, the accuracy of both the directional and level predictions of the DFN-AI models is better than that of their single benchmark counterparts.
With regard to level prediction accuracy, the average MAPE values of BPNN (0.07647), RBFNN (0.10432), and ELM (0.10382) are higher than the average MAPE values of DFN-BP (0.07003), DFN-RBF (0.07788), and DFN-ELM (0.06916), and the DFN-ELM model has the lowest MAPE. The average RMSE values of BPNN (4.60547), RBFNN (6.52349), and ELM (6.26923) are higher than the average RMSE values of DFN-BP (4.25353), DFN-RBF (4.66344), and DFN-ELM (4.17084), and the DFN-ELM model has the lowest RMSE. With regard to directional prediction accuracy, the average Dstat values of BPNN (0.45000), RBFNN (0.48000), and ELM (0.49000) are lower than the average Dstat values of DFN-BP (0.62000), DFN-RBF (0.60000), and DFN-ELM (0.63000), and the DFN-ELM model has the highest Dstat.

Table 2. The errors and elapsed times of daily WTI crude oil price forecasting using the six methods.
(Values in each row: BPNN | DFN-BP | RBFNN | DFN-RBF | ELM | DFN-ELM.)

Training 1984/01/18-1984/10/02, testing 1984/10/03-1984/10/30:
  MAPE    0.00762 | 0.00640 | 0.00874 | 0.00617 | 0.00698 | 0.00594
  RMSE    0.32236 | 0.29254 | 0.33550 | 0.28044 | 0.29118 | 0.27305
  Dstat   0.30000 | 0.70000 | 0.50000 | 0.75000 | 0.55000 | 0.80000
  Time(s) 5.30200 | 36.74100 | 3.43500 | 15.27500 | 1.00000 | 2.82900

Training 1988/06/08-1989/02/23, testing 1989/02/24-1989/03/23:
  MAPE    0.01567 | 0.01243 | 0.01438 | 0.01279 | 0.02008 | 0.01249
  RMSE    0.36350 | 0.30829 | 0.35877 | 0.31448 | 0.50126 | 0.31017
  Dstat   0.40000 | 0.70000 | 0.40000 | 0.70000 | 0.30000 | 0.70000
  Time(s) 5.20800 | 33.06400 | 3.44100 | 15.36200 | 0.96900 | 2.81500

Training 1993/05/24-1994/02/08, testing 1994/02/09-1994/03/09:
  MAPE    0.01483 | 0.01247 | 0.01550 | 0.01276 | 0.01442 | 0.01261
  RMSE    0.28055 | 0.26285 | 0.28203 | 0.26627 | 0.27137 | 0.25732
  Dstat   0.50000 | 0.65000 | 0.30000 | 0.55000 | 0.50000 | 0.65000
  Time(s) 7.28000 | 39.09000 | 3.30800 | 15.83400 | 1.05600 | 2.98600

Training 1993/12/28-1994/09/14, testing 1994/09/15-1994/10/12:
  MAPE    0.01098 | 0.01064 | 0.01034 | 0.01033 | 0.01170 | 0.01038
  RMSE    0.24365 | 0.24022 | 0.23447 | 0.23875 | 0.25587 | 0.23808
  Dstat   0.55000 | 0.75000 | 0.70000 | 0.80000 | 0.45000 | 0.80000
  Time(s) 5.84300 | 37.62600 | 3.46400 | 44.16300 | 1.06300 | 3.06300

Training 1997/05/16-1998/02/03, testing 1998/02/04-1998/03/04:
  MAPE    0.01225 | 0.01045 | 0.01177 | 0.00997 | 0.01534 | 0.01036
  RMSE    0.27124 | 0.25745 | 0.25661 | 0.25452 | 0.30291 | 0.25600
  Dstat   0.45000 | 0.75000 | 0.60000 | 0.70000 | 0.35000 | 0.70000
  Time(s) 6.11300 | 37.60900 | 3.40900 | 43.74900 | 3.12000 | 2.88600

Training 2005/10/12-2006/06/30, testing 2006/07/05-2006/08/01:
  MAPE    0.01171 | 0.01029 | 0.01060 | 0.01011 | 0.01301 | 0.01024
  RMSE    1.00221 | 0.92563 | 0.95441 | 0.92237 | 1.14353 | 0.91243
  Dstat   0.60000 | 0.75000 | 0.65000 | 0.75000 | 0.40000 | 0.80000
  Time(s) 5.26600 | 34.28100 | 3.33400 | 14.91300 | 0.95300 | 2.84700

Training 2007/05/22-2008/02/06, testing 2008/02/07-2008/03/06:
  MAPE    0.01807 | 0.01603 | 0.01952 | 0.01631 | 0.02244 | 0.01641
  RMSE    2.22894 | 2.06907 | 2.38964 | 2.09595 | 2.78182 | 2.09777
  Dstat   0.60000 | 0.75000 | 0.35000 | 0.70000 | 0.40000 | 0.75000
  Time(s) 5.14100 | 34.52800 | 3.45000 | 15.09800 | 0.92600 | 2.82900

Training 2012/09/27-2013/06/14, testing 2013/06/17-2013/07/15:
  MAPE    0.01291 | 0.01105 | 0.01595 | 0.01106 | 0.02258 | 0.01124
  RMSE    1.56477 | 1.40429 | 2.04951 | 1.40391 | 2.89178 | 1.43652
  Dstat   0.40000 | 0.55000 | 0.35000 | 0.60000 | 0.40000 | 0.65000
  Time(s) 5.07800 | 35.15500 | 3.41000 | 15.84300 | 1.37500 | 3.06400

Training 2014/05/01-2015/01/15, testing 2015/01/16-2015/02/13:
  MAPE    0.04152 | 0.03659 | 0.03783 | 0.03794 | 0.03759 | 0.03642
  RMSE    2.51419 | 2.13464 | 2.32341 | 2.20231 | 2.12963 | 2.12610
  Dstat   0.45000 | 0.55000 | 0.50000 | 0.60000 | 0.45000 | 0.60000
  Time(s) 5.12600 | 34.12700 | 3.09400 | 15.49000 | 0.95300 | 2.89400

Training 2016/05/02-2017/01/19, testing 2017/01/20-2017/02/16:
  MAPE    0.01102 | 0.00956 | 0.01017 | 0.00991 | 0.00988 | 0.00990
  RMSE    0.69340 | 0.62634 | 0.65759 | 0.62753 | 0.64219 | 0.63693
  Dstat   0.50000 | 0.65000 | 0.70000 | 0.75000 | 0.55000 | 0.70000
  Time(s) 5.11100 | 33.50500 | 3.12600 | 15.12800 | 0.93800 | 2.86000

Average value:
  MAPE    0.01566 | 0.01359 | 0.01548 | 0.01374 | 0.01740 | 0.01360
  RMSE    0.94848 | 0.85213 | 0.98419 | 0.86065 | 1.12115 | 0.85444
  Dstat   0.47500 | 0.68000 | 0.50500 | 0.69000 | 0.43500 | 0.71500
  Time(s) 5.54680 | 35.57260 | 3.34710 | 15.26960 | 1.23530 | 2.90730

(In the original table, the value in boldface represents the best performance amongst the 6 models in terms of MAPE, RMSE and Dstat.)
This clearly indicates that our proposed DFN-AI crude oil price forecasting method is robust with respect to both data frequency and structural breaks.

This brings us to four conclusions. (i) The proposed DFN-AI models, i.e., DFN-BP, DFN-RBF, and DFN-ELM, yield the best directional and level predictions. This is the case because a complex network analysis algorithm can extract the fluctuation features of crude oil prices and, using the extracted information, reconstruct the data and improve predictive model performance. (ii) The proposed DFN-AI models are able to generalize their crude oil price forecasting results, i.e., their forecasting power is not sensitive to the training and testing sample selection. (iii) The proposed DFN-AI models are robust with respect to data frequency and structural breaks. (iv) Although the last rows of Tables 2-4 indicate that the proposed DFN-AI models have a higher average running time, and thus a higher computational cost, than the corresponding single models, this concern is disappearing because computing power is rapidly increasing and parallel computing hardware is now in common use.

4.2. Diebold-Mariano (DM) test

In using the DM test to statistically confirm our conclusions, the DMS value of Eq. (19) and the p-value are used to measure how much the tested model improves on the benchmark model. Table 5 lists the corresponding DMS values and p-values (in brackets). Note that the p-values of the DFN-AI forecasting models tested against their single benchmarks are below 10% in all three cases of crude oil price data. This indicates that the data reconstruction improves forecasting at a confidence level of 90%, i.e., the DFN-AI models are statistically more effective than their corresponding single models. Note also that, because the p-values are above 10% for the comparisons within the three DFN-AI models (i.e., DFN-BP, DFN-RBF, and DFN-ELM) and within the three single AI models (i.e., BPNN, RBFNN, and ELM), no model is unmistakably superior within either group.

5.
Conclusion

This paper has proposed a novel hybrid prediction method that combines complex network time series analysis and artificial intelligence algorithms. A complex network analysis of the time series is first performed as a preprocessor of the original crude oil price data to extract the fluctuation features and then reconstruct the original data. An artificial intelligence tool is then employed to model the reconstructed data and obtain the final prediction.

We first analyze the sensitivity of the data fluctuation network parameters and then test the performance of the new hybrid method (DFN-AI) with respect to such factors as random sample selection, sample frequency, and sample structural breaks. From the empirical analysis, we draw four conclusions. (1) A complex network analysis algorithm can be used to extract the fluctuation features of the crude oil price and to improve the predictive performance of traditional AI models by reconstructing the data using this extracted information. (2) The proposed DFN-AI models (DFN-BP, DFN-RBF, and DFN-ELM) perform significantly better than the traditional models in predicting both direction and level, indicating their ability to model the nonlinear patterns hidden in crude oil prices. (3) The forecasting performance of the proposed DFN-AI model is excellent irrespective of random sample selection, sample frequency, or sample structural breaks, indicating its robustness and reliability.

Fig. 9. The weekly sample data and forecast results: (a, b) the original training samples and the DFNs mapped from these sample data. (c, d) Actual testing sample values and predicted series.
(4) Although the average running time of the proposed DFN-AI models implies a higher computational cost than that of the corresponding single models, this concern is disappearing because computing power is rapidly increasing and parallel computing hardware is now in common use. Thus the DFN-AI crude oil price forecasting method is a useful tool for investors and analysts evaluating price trends and forecasting crude oil prices. For example, we can use a complex network analysis algorithm to map crude oil prices onto a directed and weighted data fluctuation network (DFN) and use the topological structure of the DFN (e.g., node strength, clustering coefficient, and node betweenness) to characterize the fluctuation characteristics of crude oil prices. Specifically, we can use the link relations among the nodes to uncover the fluctuation trends of crude oil prices. Thus, our new method can be used to capture the complex dynamic behavior of crude oil prices. For our oil price prediction problem, mapping the time-series data into a data fluctuation network removes the noise in the data and reveals the underlying tendencies. This makes it possible to resolve the complexity and irregularity of the oil price prediction problem caused by its intrinsic dynamics, and it results in more accurate predictions than those offered by the traditional prediction models. Note that we used 200 sample data as an example for training and testing; different sample sizes can be set depending on the needs of the analysis. Here we discuss the effect of increasing the training set size on the prediction results. For the sake of simplicity, we fix the model parameters set above and change only the training set size. Fig. 11 shows the prediction results of the single AI models and the DFN-AI models under different training set sizes. It can be seen that in Fig.
11, under each of these single AI and DFN-AI models, the level accuracy indicators MAPE and RMSE move downward and then gradually stabilize, while the direction accuracy indicator Dstat moves upward and then stabilizes. Such tendencies indicate that both the level and direction prediction accuracies of these single AI and DFN-AI models improve as the sample size increases.

Table 3. The errors and elapsed times of weekly WTI crude oil price forecasting using the six methods.
(Values in each row: BPNN | DFN-BP | RBFNN | DFN-RBF | ELM | DFN-ELM.)

Training 1984/03/23-1987/08/28, testing 1987/09/04-1988/01/15:
  MAPE    0.02360 | 0.02327 | 0.02576 | 0.02618 | 0.02566 | 0.02339
  RMSE    0.64594 | 0.64113 | 0.70314 | 0.69752 | 0.64395 | 0.64214
  Dstat   0.50000 | 0.55000 | 0.40000 | 0.60000 | 0.45000 | 0.50000
  Time(s) 5.18800 | 33.59600 | 3.11000 | 14.71100 | 1.01600 | 2.84800

Training 1989/12/22-1993/05/28, testing 1993/06/04-1993/10/15:
  MAPE    0.02590 | 0.01827 | 0.02182 | 0.01986 | 0.02688 | 0.01835
  RMSE    0.56671 | 0.41464 | 0.52458 | 0.44756 | 0.59137 | 0.42591
  Dstat   0.35000 | 0.60000 | 0.50000 | 0.55000 | 0.40000 | 0.60000
  Time(s) 5.12500 | 33.71600 | 3.06300 | 14.94100 | 0.92200 | 2.82900

Training 1992/11/06-1996/04/12, testing 1996/04/19-1996/08/30:
  MAPE    0.03023 | 0.02810 | 0.02677 | 0.02606 | 0.03334 | 0.02693
  RMSE    0.78025 | 0.76047 | 0.75036 | 0.70696 | 0.85774 | 0.73953
  Dstat   0.55000 | 0.65000 | 0.75000 | 0.80000 | 0.45000 | 0.75000
  Time(s) 5.23500 | 33.36100 | 3.11000 | 14.80000 | 0.95300 | 2.86100

Training 1999/07/23-2002/12/27, testing 2003/01/03-2003/05/16:
  MAPE    0.03787 | 0.03686 | 0.05033 | 0.04169 | 0.04253 | 0.03836
  RMSE    1.69630 | 1.75929 | 1.89560 | 1.90146 | 1.65495 | 1.77796
  Dstat   0.50000 | 0.65000 | 0.50000 | 0.40000 | 0.45000 | 0.60000
  Time(s) 5.12700 | 33.58400 | 3.07800 | 14.85200 | 0.90600 | 2.79700

Training 2001/06/22-2004/11/26, testing 2004/12/03-2005/04/15:
  MAPE    0.03579 | 0.03393 | 0.04481 | 0.03242 | 0.03545 | 0.03305
  RMSE    2.05170 | 2.01826 | 2.61509 | 2.00607 | 2.01482 | 2.01028
  Dstat   0.65000 | 0.70000 | 0.55000 | 0.70000 | 0.65000 | 0.75000
  Time(s) 5.04800 | 33.63700 | 3.09500 | 14.89500 | 0.93800 | 2.82900

Training 2002/06/07-2005/11/11, testing 2005/11/18-2006/03/31:
  MAPE    0.02833 | 0.02860 | 0.02894 | 0.02674 | 0.02770 | 0.02741
  RMSE    2.15536 | 2.12314 | 2.27032 | 2.04847 | 2.08595 | 2.06149
  Dstat   0.60000 | 0.65000 | 0.55000 | 0.65000 | 0.55000 | 0.65000
  Time(s) 5.06900 | 33.56800 | 3.06300 | 14.66200 | 0.93700 | 2.82900

Training 2008/03/07-2011/08/12, testing 2011/08/19-2011/12/30:
  MAPE    0.02766 | 0.02664 | 0.03019 | 0.02638 | 0.02717 | 0.02645
  RMSE    3.13763 | 2.98420 | 3.26857 | 3.08068 | 3.12836 | 2.98981
  Dstat   0.50000 | 0.65000 | 0.55000 | 0.70000 | 0.50000 | 0.65000
  Time(s) 5.12600 | 33.51200 | 3.07900 | 14.86300 | 0.93700 | 2.81300

Training 2009/02/20-2012/07/27, testing 2012/08/03-2012/12/14:
  MAPE    0.01461 | 0.01408 | 0.01503 | 0.01483 | 0.01458 | 0.01430
  RMSE    1.94809 | 1.99358 | 1.88318 | 1.93221 | 1.97159 | 1.95198
  Dstat   0.50000 | 0.60000 | 0.55000 | 0.75000 | 0.55000 | 0.65000
  Time(s) 5.09500 | 33.46800 | 3.06300 | 14.77100 | 0.93800 | 2.87500

Training 2011/01/21-2014/06/27, testing 2014/07/04-2014/11/14:
  MAPE    0.02085 | 0.01768 | 0.01985 | 0.01701 | 0.02169 | 0.01752
  RMSE    2.29797 | 2.11034 | 2.23731 | 2.06173 | 2.40837 | 2.10342
  Dstat   0.45000 | 0.55000 | 0.40000 | 0.70000 | 0.45000 | 0.70000
  Time(s) 5.08400 | 33.53800 | 3.06400 | 14.65200 | 0.92200 | 2.76600

Training 2012/01/06-2015/06/12, testing 2015/06/19-2015/10/30:
  MAPE    0.03502 | 0.03358 | 0.04372 | 0.03758 | 0.04654 | 0.03453
  RMSE    2.34512 | 2.34650 | 2.58654 | 2.43634 | 2.43559 | 2.32001
  Dstat   0.60000 | 0.70000 | 0.55000 | 0.60000 | 0.50000 | 0.60000
  Time(s) 5.07200 | 33.69800 | 3.07800 | 14.86300 | 0.93800 | 2.81300

Average value:
  MAPE    0.02799 | 0.02610 | 0.03072 | 0.02687 | 0.03015 | 0.02603
  RMSE    1.76251 | 1.71515 | 1.87347 | 1.73190 | 1.77927 | 1.70225
  Dstat   0.52000 | 0.63000 | 0.53000 | 0.64500 | 0.49500 | 0.64500
  Time(s) 5.11690 | 33.56780 | 3.08030 | 14.80100 | 0.94070 | 2.82600

(In the original table, the value in boldface represents the best performance amongst the 6 models in terms of MAPE, RMSE and Dstat.)

Comparing the prediction results of the single AI models and the DFN-AI models, we find that the proposed DFN-AI models (DFN-BP, DFN-RBF, and DFN-ELM) perform significantly better than the single AI models (BPNN, RBFNN, and ELM) in predicting both direction and level under different training data sizes. Note that all the results in Fig.
11 are obtained under the fixed parameters set in Section 4. In practical applications, in order to achieve higher prediction accuracy, we need to set appropriate parameters of the DFN-AI models according to the size and structural characteristics of the sample data. For example, if the sample data size is large, the parameters k and L can be set to larger values and the parameter α can be set to a smaller value.

In addition to examining crude oil price data, our DFN-AI method can also address other forecasting tasks, especially when complex, irregular, and highly nonlinear data are involved. In practical applications, the basic procedure of the method is: first apply the DFN as a preprocessor of the original data to reconstruct the data, and then use a powerful AI tool such as an SVM or a deep neural network (DNN) to conduct the prediction on the reconstructed data. In the process of building the DFN-AI model, choosing suitable parameters (i.e., r, k, and L) to build the data fluctuation network determines the quality of the data reconstruction and the predictive accuracy of the model. In future studies, we will further research how to determine the optimal parameters. Note that when we reconstruct the training data in this paper, we use all the neighbor nodes of the target node. There are other topological indicators for describing the topological structure of a data fluctuation network, such as K-core centrality, the H index, and community structure. Further research is needed to make use of these topological indicators when reconstructing training data. Because the volatility of crude oil prices is complicated, future research could combine the latest complex network theory, an econometric model, and

Fig. 10. The monthly sample data and forecast results: (a) the original training samples and the DFNs mapped from these sample data.
(b) Actual testing sample values and predicted series.

Table 4. The errors and elapsed times of monthly WTI crude oil price forecasting using the six methods.
(Values in each row: BPNN | DFN-BP | RBFNN | DFN-RBF | ELM | DFN-ELM.)

Training 1987/06-2002/05, testing 2002/06-2004/01:
  MAPE    0.07206 | 0.06731 | 0.07564 | 0.06816 | 0.07061 | 0.06649
  RMSE    2.42456 | 2.38523 | 2.61251 | 2.37701 | 2.39976 | 2.35643
  Dstat   0.40000 | 0.55000 | 0.50000 | 0.65000 | 0.40000 | 0.55000
  Time(s) 5.25400 | 34.68700 | 3.15700 | 16.24100 | 0.92200 | 2.78200

Training 1989/02-2004/01, testing 2004/02-2005/09:
  MAPE    0.07468 | 0.06370 | 0.11056 | 0.07925 | 0.21223 | 0.06497
  RMSE    4.61656 | 3.73852 | 7.35495 | 4.43963 | 13.24985 | 3.79178
  Dstat   0.30000 | 0.60000 | 0.30000 | 0.55000 | 0.30000 | 0.55000
  Time(s) 5.20400 | 33.31100 | 3.13000 | 16.58500 | 0.93800 | 2.81400

Training 1989/12-2004/11, testing 2004/12-2006/07:
  MAPE    0.05960 | 0.05604 | 0.12910 | 0.06974 | 0.05835 | 0.05621
  RMSE    4.22660 | 3.86493 | 9.84207 | 4.61297 | 3.95064 | 3.87414
  Dstat   0.40000 | 0.75000 | 0.65000 | 0.55000 | 0.55000 | 0.75000
  Time(s) 5.17300 | 33.63400 | 3.11000 | 17.23500 | 0.93700 | 3.03200

Training 1994/02-2009/01, testing 2009/02-2010/09:
  MAPE    0.07065 | 0.06586 | 0.07735 | 0.06558 | 0.06817 | 0.06101
  RMSE    5.82473 | 5.60917 | 5.98303 | 5.80642 | 5.66098 | 5.17863
  Dstat   0.50000 | 0.55000 | 0.55000 | 0.55000 | 0.55000 | 0.65000
  Time(s) 5.07900 | 33.74300 | 3.10900 | 16.61100 | 0.91000 | 2.82800

Training 1999/12-2014/11, testing 2014/12-2016/07:
  MAPE    0.10538 | 0.09723 | 0.12895 | 0.10668 | 0.10975 | 0.09711
  RMSE    5.93489 | 5.66978 | 6.82489 | 6.08115 | 6.08490 | 5.65323
  Dstat   0.65000 | 0.65000 | 0.40000 | 0.70000 | 0.65000 | 0.65000
  Time(s) 5.08400 | 33.51300 | 3.09400 | 16.29000 | 0.98400 | 2.84600

Average value:
  MAPE    0.07647 | 0.07003 | 0.10432 | 0.07788 | 0.10382 | 0.06916
  RMSE    4.60547 | 4.25353 | 6.52349 | 4.66344 | 6.26923 | 4.17084
  Dstat   0.45000 | 0.62000 | 0.48000 | 0.60000 | 0.49000 | 0.63000
  Time(s) 5.15880 | 33.77760 | 3.12000 | 16.59240 | 0.93820 | 2.86040

(In the original table, the value in boldface represents the best performance amongst the 6 models in terms of MAPE, RMSE and Dstat.)
an artificial intelligence algorithm to construct new hybrid prediction models and further enhance forecasting accuracy.

Acknowledgments

The research was supported by the following foundations: the National Natural Science Foundation of China (71503132, 71690242, 91546118, 11731014, 71403105, 61403171, 61603011); the Qing Lan Project of Jiangsu Province (2017); the University Natural Science Foundation of Jiangsu Province (14KJA110001); the Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application; the program of the China Scholarship Council (No. 201606770023); the China Postdoctoral Science Foundation (2016M590973); and the Shanxi Postdoctoral Research Foundation. The Boston University Center for Polymer Studies is supported by NSF Grants PHY-1505000, CMMI-1125290, and CHE-1213217, by DTRA Grant HDTRA1-14-1-0017, and by DOE Contract DE-AC07-05Id14517.

Table 5. DM test results for DFN-AI models and their single benchmark models. Each entry gives the DMS value of the tested model against the reference model, with the p-value in brackets.

Daily data:
  DFN-BP  vs DFN-RBF: 1.16810 (0.13640); vs DFN-ELM: 0.43311 (0.33760); vs BPNN: 2.66890 (0.01284)
  DFN-RBF vs DFN-ELM: 0.71283 (0.75300); vs RBFNN: 1.92100 (0.04346)
  DFN-ELM vs ELM: 1.90400 (0.04236)
  BPNN    vs RBFNN: 0.63057 (0.27200); vs ELM: 1.17000 (0.13600)
  RBFNN   vs ELM: 1.17680 (0.12692)

Weekly data:
  DFN-BP  vs DFN-RBF: 0.68571 (0.25510); vs DFN-ELM: 1.63160 (0.93140); vs BPNN: 1.82150 (0.04863)
  DFN-RBF vs DFN-ELM: 1.50070 (0.91620); vs RBFNN: 2.37900 (0.02065)
  DFN-ELM vs ELM: 2.07670 (0.03381)
  BPNN    vs RBFNN: 0.81490 (0.15146); vs ELM: 0.87941 (0.20100)
  RBFNN   vs ELM: 1.31400 (0.88930)

Monthly data:
  DFN-BP  vs DFN-RBF: 1.03130 (0.12864); vs DFN-ELM: 0.93865 (0.79950); vs BPNN: 2.48630 (0.03388)
  DFN-RBF vs DFN-ELM: 3.83510 (0.99070); vs RBFNN: 1.89920 (0.06518)
  DFN-ELM vs ELM: 1.83920 (0.06910)
  BPNN    vs RBFNN: 1.05030 (0.12697); vs ELM: 0.95408 (0.19700)
  RBFNN   vs ELM: 0.13606 (0.55080)

Fig. 11. The prediction results of the single AI models and DFN-AI models under different training set sizes.

References

[1] Zhang JL, Zhang YJ, Zhang L. A novel hybrid method for crude oil price forecasting. Energy Econ 2015;49:649-59.
[2] Kilian L.
Not all oil price shocks are alike: disentangling demand and supply shocks in the crude oil market. Am Econ Rev 2009;99(3):1053-69.
[3] Lizardo RA, Mollick AV. Oil price fluctuations and US dollar exchange rates. Energy Econ 2010;32(2):399-408.
[4] Kilian L, Murphy DP. The role of inventories and speculative trading in the global market for crude oil. J Appl Economet 2014;29(3):454-78.
[5] Sahir MH, Qureshi AH. Specific concerns of Pakistan in the context of energy security issues and geopolitics of the region. Energy Policy 2007;35(4):2031-7.
[6] Ramsay KW. Revisiting the resource curse: natural disasters, the price of oil, and democracy. Int Organ 2011;65(3):507-29.
[7] Lanza A, Manera M, Giovannini M. Modeling and forecasting cointegrated relationships among heavy oil and product prices. Energy Econ 2005;27(6):831-48.
[8] Murat A, Tokat E. Forecasting oil price movements with crack spread futures. Energy Econ 2009;31(1):85-90.
[9] Baumeister C, Kilian L. Real-time forecasts of the real price of oil. J Bus Econ Stat 2012;30(2):326-36.
[10] Xiang Y, Zhuang XH. Application of ARIMA model in short-term prediction of international crude oil price. Trans Tech Publicat 2013;798:979-82.
[11] Sadorsky P. Modeling and forecasting petroleum futures volatility. Energy Econ 2006;28(4):467-88.
[12] Fan Y, Zhang YJ, Tsai HT, et al. Estimating 'Value at Risk' of crude oil price and its spillover effect using the GED-GARCH approach. Energy Econ 2008;30(6):3156-71.
[13] Kang SH, Kang SM, Yoon SM. Forecasting volatility of crude oil markets. Energy Econ 2009;31(1):119-25.
[14] Mohammadi H, Su L. International evidence on crude oil price dynamics: applications of ARIMA-GARCH models. Energy Econ 2010;32(5):1001-8.
[15] Hou A, Suardi S. A nonparametric GARCH model of crude oil price return volatility. Energy Econ 2012;34(2):618-26.
[16] Moshiri S, Foroutan F. Forecasting nonlinear crude oil futures prices. Energy J 2006:81-95.
[17] Kaboudan MA.
Compumetric forecasting of crude oil prices. In: Proceedings of the 2001 congress on evolutionary computation. IEEE; 2001, vol. 1, p. 283-7.
[18] Mostafa MM, El-Masry AA. Oil price forecasting using gene expression programming and artificial neural networks. Econ Model 2016;54:40-53.
[19] Kaboli SHA, Selvaraj J, Rahim NA. Long-term electric energy consumption forecasting via artificial cooperative search algorithm. Energy 2016;115:857-71.
[20] Kaboli SHA, Fallahpour A, Selvaraj J, et al. Long-term electrical energy consumption formulating and forecasting via optimized gene expression programming. Energy 2017;126:144-64.
[21] Xie W, Yu L, Xu S, et al. A new method for crude oil price forecasting based on support vector machines. In: Computational Science - ICCS 2006; 2006. p. 444-51.
[22] Shin H, Hou T, Park K, et al. Prediction of movement direction in crude oil prices based on semi-supervised learning. Decis Supp Syst 2013;55(1):348-58.
[23] Yusof Y, Mustaffa Z. A review on optimization of least squares support vector machine for time series forecasting. Int J Artif Intell & Applications 2016;7(2):35-49.
[24] Zhao Y, Li J, Yu L. A deep learning ensemble approach for crude oil price forecasting. Energy Econ 2017;66:9-16.
[25] Yu L, Wang S, Lai KK. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm. Energy Econ 2008;30(5):2623-35.
[26] Jammazi R, Aloui C. Crude oil price forecasting: experimental evidence from wavelet decomposition and neural network modeling. Energy Econ 2012;34(3):828-41.
[27] Xiong T, Bao Y, Hu Z. Beyond one-step-ahead forecasting: evaluation of alternative multi-step-ahead forecasting models for crude oil prices. Energy Econ 2013;40:405-15.
[28] Yu L, Dai W, Tang L. A novel decomposition ensemble model with extended extreme learning machine for crude oil price forecasting. Eng Appl Artif Intell 2016;47:110-21.
[29] Yu L, Zhao Y, Tang L. A compressed sensing based AI learning paradigm for crude oil price forecasting.
Energy Econ 2014;46:23645.[30] Chiroma H, Abdulkareem S, Herawan T. Evolutionary neural network model forWest Texas intermediate crude oil price prediction. Appl Energy 2015;142:26673.[31] Wang S, Yu L, Lai KK. A novel hybrid AI system framework for crude oil priceforecasting[M]//data mining and knowledge management. Berlin, Heidelberg:Springer; 2005. p. 23342.[32] SangYF, Wang D, Wu JC, et al. Entropy-based wavelet de-noising method for timeseries analysis. Entropy 2009;11(4):112347.[33] He K, Lai KK, Yen J. A hybrid slantlet denoising least squares support vector re-gression model for exchange rate prediction. Proc Comput Sci 2010;1(1):2397405 .[34] De Faria EL, Albuquerque MP, Gonzalez JL, et al. Predicting the Brazilian stockmarket through neural networks and adaptive exponential smoothing methods. ExpSyst Appl 2009;36(10):125069.[35] Nasseri M, Moeini A, Tabesh M. Forecasting monthly urban water demand usingExtended Kalman Filter and Genetic Programming. Exp Syst Appl2011;38(6):738795.[36] Lacasa L, Luque B, Ballesteros F, et al. From time series to complex networks: thevisibility graph. Proc Natl Acad Sci 2008;105(13):4972 5.[37] Xu X, Zhang J, Small M. Superfamily phenomena and motifs of networks inducedfrom time series. Proc Natl Acad Sci 2008;105(50):19601 5.[38] Zhang J, Small M. Complex network from pseudo-periodic time series: topologyversus dynamics. Phys Rev Lett 2006;96(23):238701 .[39] Wang M, Tian L. From time series to complex networks: the phase space coarsegraining. Phys A: Stat Mech Its Appl 2016;461:456 68.[40] Wang M, Tian L. Regulating effect of the energy market Theoretical and empiricalanalysis based on a novel energy prices energy supplyeconomic growth dynamicsystem. Appl Energy 2015;155:526 46.[41] Du R, Wang Y, Dong G, et al. A complex network perspective on interrelations andevolution features of international oil trade, 20022013. Appl Energy2017;196:14251.[42] Du R, Dong G, Tian L, et al. 
Spatiotemporal dynamics and fi tness analysis of globaloil market: based on complex network. PloS One 2016;11(10):e0162362.[43] Wang M, Chen Y, Tian L, et al. Fluctuation behavior analysis of international crudeoil and gasoline price based on complex network perspective. Appl Energy2016;175:10927.[44] Wang M, Tian L, Du R. Research on the interaction patterns among the global crudeoil import dependency countries: a complex network approach. Appl Energy2016;180:77991.[45] Wang M, Tian L, Xu H, et al. Systemic risk and spatiotemporal dynamics of theconsumer market of China. Phys A: Stat Mech Its Appl 2017;473:188 204 .[46] ChenH, Tian L, Wang M, et al. Analysis of the dynamic evolutionary behavior ofAmerican heating oil spot and futures price fluctuation networks. Sustainability2017;9(4):574.[47] An H, Zhong W, Chen Y, et al. Features and evolution of international crude oiltrade relationships: a trading-based network analysis. Energy 2014;74:2549.[48] An H, Gao X, Fang W, et al. Research on patterns in thefluctuation of the co-movement between crude oil futures and spot prices: a complex network approach.Appl Energy 2014;136:106775.[49] Gao X, An H, Fang W, et al. The transmission of fluctuant patterns of the forexburden based on international crude oil prices. Energy 2014;73:3806.[50] Huang S, An H, Gao X, et al. Identifying the multiscale impacts of crude oil priceshocks on the stock market in China at the sector level. Phys A: Stat Mech Its Appl2015;434:13 24.[51] Jia X, An H, Fang W, et al. How do correlations of crude oil prices co-move? A greycorrelation-based wavelet perspective. Energy Econ 2015;49:588 98.[52] Jin W, Li Z J, Wei L S, et al. The improvements of BP neural network learningalgorithm[C]. In: Signal processing proceedings, 2000. WCCC-ICSP 2000. 5thInternational conference on IEEE; 2000, 3, p. 1647 9.[53] Er MJ, Wu S, Lu J, et al. Face recognition with radial basis function (RBF) neuralnetworks. 
IEEE Trans Neural Networks 2002;13(3):697 710 .[54] Huang GB, Zhu QY, Siew CK. Extreme learning machine: theory and applications.Neurocomputing 2006;70(1):489501 .[55] Cao J, Zhang K, Luo M, et al. Extreme learning machine and adaptive sparse re-presentation for image classi fi cation. Neural Networks 2016;81:91102 .[56] Diebold FX, Mariano RS. Comparing predictive accuracy. J Bus Econ Stat2002;20(1):134 44.[57] Hastie T, Tibshirani R, Friedman J. Neural networks, the elements of statisticallearning: data mining, inference, and prediction. New York, NY: Springer New York;2009. p. 389 416 .M. Wang et al. Applied Energy 220 (2018) 480495495

 
