
PSO Algorithm for Optimizing BP Neural Network Code (Optimization Algorithms for BP Neural Networks)

Published: 2023-04-13 14:24:48    Source: 创意岭

Hello everyone! Today the 创意岭 editors will introduce the topic of PSO-optimized BP neural network code. Below is our round-up of answers to this question; let's take a look.


In this article:

1. C++ BP neural network: my own program seems buggy and the accuracy is poor
2. MATLAB code for a BP-neural-network-based image restoration algorithm
3. Using PSO to tune the parameters of a BP boiler model
4. MATLAB BP training functions (traingdm, trainlm, trainbr) and their source

1. C++ BP neural network: I wrote my own program, but something seems wrong and the accuracy is very poor. Any help from the experts?

Impressive! How about writing a PSO routine or something to optimize it... (a minimal sketch of exactly that idea appears under question 3 below)

2. Looking for MATLAB code for a BP-neural-network-based image restoration algorithm
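The answer below is a reconstruction script in the style of a stacked autoencoder. Two assumptions before running it: it depends on Rasmus Berg Palm's DeepLearnToolbox (nnsetup, nntrain, nnff, and the commented-out saesetup/saetrain) being on the MATLAB path, and the image paths are specific to the original poster's machine, so both need adjusting.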

function Solar_SAE
tic;
n = 300;                                 % number of training images
m = 20;                                  % number of test images
train_x = [];
test_x = [];
for i = 1:n
    %filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3train\' num2str(i,'%03d') '.bmp']);
    %filename = strcat(['E:\matlab\work\c0\TrainImage' num2str(i,'%03d') '.bmp']);
    filename = strcat(['E:\image restoration\3-(' num2str(i) ')-4.jpg']);
    b = imread(filename);
    %c = rgb2gray(b);
    c = b;
    [ImageRow, ImageCol] = size(c);
    c = reshape(c, [1, ImageRow*ImageCol]);  % flatten each image to a row
    train_x = [train_x; c];
end
for i = 1:m
    %filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3test\' num2str(i,'%03d') '.bmp']);
    %filename = strcat(['E:\matlab\work\c0\TestImage' num2str(i+100,'%03d') '-1.bmp']);
    filename = strcat(['E:\image restoration\3-(' num2str(i+100) ').jpg']);
    b = imread(filename);
    %c = rgb2gray(b);
    c = b;
    [ImageRow, ImageCol] = size(c);
    c = reshape(c, [1, ImageRow*ImageCol]);
    test_x = [test_x; c];
end
train_x = double(train_x)/255;           % scale pixels to [0,1]
test_x = double(test_x)/255;
%train_y = double(train_y);
%test_y = double(test_y);
% Setup and train a stacked denoising autoencoder (SDAE)
rng(0);
%sae = saesetup([4096 500 200 50]);
%sae.ae{1}.activation_function = 'sigm';
%sae.ae{1}.learningRate = 0.5;
%sae.ae{1}.inputZeroMaskedFraction = 0.0;
%sae.ae{2}.activation_function = 'sigm';
%sae.ae{2}.learningRate = 0.5;
%%sae.ae{2}.inputZeroMaskedFraction = 0.0;
%sae.ae{3}.activation_function = 'sigm';
%sae.ae{3}.learningRate = 0.5;
%sae.ae{3}.inputZeroMaskedFraction = 0.0;
%sae.ae{4}.activation_function = 'sigm';
%sae.ae{4}.learningRate = 0.5;
%sae.ae{4}.inputZeroMaskedFraction = 0.0;
%opts.numepochs = 10;
%opts.batchsize = 50;
%sae = saetrain(sae, train_x, opts);
%visualize(sae.ae{1}.W{1}(:,2:end)');
% Use the SDAE to initialize a FFNN; here the 4096-...-4096 autoencoder
% is trained from scratch on the flattened 64x64 images.
nn = nnsetup([4096 1500 500 200 50 200 500 1500 4096]);
nn.activation_function = 'sigm';
nn.learningRate = 0.03;
nn.output = 'linear'; % output unit 'sigm' (=logistic), 'softmax' and 'linear'
%add pretrained weights
%nn.W{1} = sae.ae{1}.W{1};
%nn.W{2} = sae.ae{2}.W{1};
%nn.W{3} = sae.ae{3}.W{1};
%nn.W{4} = sae.ae{3}.W{2};
%nn.W{5} = sae.ae{2}.W{2};
%nn.W{6} = sae.ae{1}.W{2};
%nn.W{7} = sae.ae{2}.W{2};
%nn.W{8} = sae.ae{1}.W{2};
% Train the FFNN to reconstruct its own input
opts.numepochs = 30;
opts.batchsize = 150;
tx = test_x(14,:);                       % one held-out test image
nn1 = nnff(nn, tx, tx);                  % forward pass before training
ty1 = reshape(nn1.a{9}, 64, 64);
nn = nntrain(nn, train_x, train_x, opts);
toc;
tic;
nn2 = nnff(nn, tx, tx);                  % forward pass after training
toc;
tic;
ty2 = reshape(nn2.a{9}, 64, 64);         % reconstructed image
tx = reshape(tx, 64, 64);
tz = tx - ty2;                           % residual (input minus reconstruction)
tz = im2bw(tz, 0.1);                     % threshold the residual
%imshow(tx);
%figure,imshow(ty2);
%figure,imshow(tz);
ty = cat(2, tx, ty2, tz);                % input | reconstruction | residual
montage(ty);
filename3 = strcat(['E:\image restoration\3.jpg']);
e = imread(filename3);
f = rgb2gray(e);
f = imresize(f, [64, 64]);
%imshow(ty2);
f = double(f)/255;
% Assumes a psnr helper returning [PSNR, MSE]; the Image Processing
% Toolbox psnr returns the SNR, not the MSE, as its second output.
[PSNR, MSE] = psnr(ty2, f)
imwrite(ty2, 'E:\image restoration\bptest.jpg', 'jpg');
toc;
%visualize(ty);
%[er, bad] = nntest(nn, tx, tx);
%assert(er < 0.1, 'Too big error');

3. I built a mathematical model of boiler parameters with a BP neural network, then used particle swarm optimization to search for the best parameters; the objective function being optimized is the boiler...

PSO here is just the method used to train the network: when PSO terminates, the particle with the best fitness represents the best network, i.e. your trained network. You say every run gives a different result; that is to be expected, because PSO is a stochastic search algorithm. Its initialization and its search process are both random, so the training outcome will naturally differ from run to run. As for which solution is best: you defined a fitness function, and the best particle each run reports is, by that definition, the best for that run. The results differ across runs, but each is the best its own run found, so any of them is a valid answer (to pick a single overall winner, just compare their final fitness values).
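To make this concrete, here is a minimal sketch of PSO training a BP network's weights, in plain MATLAB with no toolboxes (implicit expansion requires R2016b or later). The task (XOR), the 2-4-1 network size, and the swarm constants are illustrative assumptions, not taken from the question; the fitness is the mean squared error, so lower is better.

% Each particle's position holds all weights and biases of a 2-4-1
% sigmoid network; fitness = MSE on XOR (to be minimized).
rng('shuffle');                          % random initialization: runs differ
X = [0 0 1 1; 0 1 0 1];                  % inputs, one column per sample
T = [0 1 1 0];                           % XOR targets (illustrative task)
nh = 4;                                  % hidden neurons
D  = nh*2 + nh + nh + 1;                 % W1, b1, W2, b2 -> 17 dimensions
np = 30; iters = 300;                    % swarm size, iterations
w = 0.72; c1 = 1.49; c2 = 1.49;          % common PSO coefficients
sig = @(z) 1./(1 + exp(-z));             % logistic activation
pos = randn(np, D); vel = zeros(np, D);
pbest = pos; pbestf = inf(np, 1);        % personal bests
gbest = pos(1, :); gbestf = inf;         % global best
for it = 1:iters
    for i = 1:np
        v  = pos(i, :);
        W1 = reshape(v(1:nh*2), nh, 2);  b1 = v(nh*2+1 : nh*2+nh)';
        W2 = v(nh*2+nh+1 : nh*2+2*nh);   b2 = v(end);
        Y  = sig(W2 * sig(W1*X + b1) + b2);   % BP forward pass
        f  = mean((Y - T).^2);                % fitness of this particle
        if f < pbestf(i), pbestf(i) = f; pbest(i, :) = v; end
        if f < gbestf,    gbestf    = f; gbest       = v; end
    end
    % Standard velocity/position update toward personal and global bests.
    vel = w*vel + c1*rand(np, D).*(pbest - pos) ...
                + c2*rand(np, D).*(gbest - pos);
    pos = pos + vel;
end
fprintf('best fitness (MSE) this run: %g\n', gbestf);

Run this twice and gbestf will usually differ, which is exactly the behavior described in the question; each run's gbest is still the best network that run found.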

4. The implementation of the training functions (traingdm, trainlm, trainbr) in MATLAB's BP neural network training algorithms, and the corresponding VC source code

VC source code? You must be joking... These are toolbox M-files, not VC programs. Here is the M-code for trainlm:

function [out1,out2] = trainlm(varargin)
%TRAINLM Levenberg-Marquardt backpropagation.
%
%  <a href="matlab:doc trainlm">trainlm</a> is a network training function that updates weight and
%  bias states according to Levenberg-Marquardt optimization.
%
%  <a href="matlab:doc trainlm">trainlm</a> is often the fastest backpropagation algorithm in the toolbox,
%  and is highly recommended as a first choice supervised algorithm,
%  although it does require more memory than other algorithms.
%
%  [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T) takes a network NET, input data X
%  and target data T and returns the network after training it, and a
%  training record TR.
%
%  [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T,Xi,Ai,EW) takes additional optional
%  arguments suitable for training dynamic networks and training with
%  error weights. Xi and Ai are the initial input and layer delays states
%  respectively and EW defines error weights used to indicate
%  the relative importance of each target value.
%
%  Training occurs according to training parameters, with default values.
%  Any or all of these can be overridden with parameter name/value argument
%  pairs appended to the input argument list, or by appending a structure
%  argument with fields having one or more of these names.
%    show             25  Epochs between displays
%    showCommandLine   0  generate command line output
%    showWindow        1  show training GUI
%    epochs          100  Maximum number of epochs to train
%    goal              0  Performance goal
%    max_fail          5  Maximum validation failures
%    min_grad      1e-10  Minimum performance gradient
%    mu            0.001  Initial Mu
%    mu_dec          0.1  Mu decrease factor
%    mu_inc           10  Mu increase factor
%    mu_max         1e10  Maximum Mu
%    time            inf  Maximum time to train in seconds
%
%  To make this the default training function for a network, and view
%  and/or change parameter settings, use these two properties:
%
%    net.<a href="matlab:doc nnproperty.net_trainFcn">trainFcn</a> = 'trainlm';
%    net.<a href="matlab:doc nnproperty.net_trainParam">trainParam</a>
%
%  See also trainscg, feedforwardnet, narxnet.

%  Mark Beale, 11-31-97, ODJ 11/20/98
%  Updated by Orlando De Jesus, Martin Hagan, Dynamic Training 7-20-05
%  Copyright 1992-2010 The MathWorks, Inc.
%  $Revision: 1.1.6.11.2.2 $ $Date: 2010/07/23 15:40:16 $

%% =======================================================
%  BOILERPLATE_START
%  This code is the same for all Training Functions.

persistent INFO;
if isempty(INFO), INFO = get_info; end
nnassert.minargs(nargin,1);
in1 = varargin{1};
if ischar(in1)
    switch (in1)
        case 'info'
            out1 = INFO;
        case 'check_param'
            nnassert.minargs(nargin,2);
            param = varargin{2};
            err = nntest.param(INFO.parameters,param);
            if isempty(err)
                err = check_param(param);
            end
            if nargout > 0
                out1 = err;
            elseif ~isempty(err)
                nnerr.throw('Type',err);
            end
        otherwise,
            try
                out1 = eval(['INFO.' in1]);
            catch me, nnerr.throw(['Unrecognized first argument: ''' in1 ''''])
            end
    end
    return
end
nnassert.minargs(nargin,2);
net = nn.hints(nntype.network('format',in1,'NET'));
oldTrainFcn = net.trainFcn;
oldTrainParam = net.trainParam;
if ~strcmp(net.trainFcn,mfilename)
    net.trainFcn = mfilename;
    net.trainParam = INFO.defaultParam;
end
[args,param] = nnparam.extract_param(varargin(2:end),net.trainParam);
err = nntest.param(INFO.parameters,param);
if ~isempty(err), nnerr.throw(nnerr.value(err,'NET.trainParam')); end
if INFO.isSupervised && isempty(net.performFcn) % TODO - fill in MSE
    nnerr.throw('Training function is supervised but NET.performFcn is undefined.');
end
if INFO.usesGradient && isempty(net.derivFcn) % TODO - fill in
    nnerr.throw('Training function uses derivatives but NET.derivFcn is undefined.');
end
if net.hint.zeroDelay, nnerr.throw('NET contains a zero-delay loop.'); end
[X,T,Xi,Ai,EW] = nnmisc.defaults(args,{},{},{},{},{1});
X = nntype.data('format',X,'Inputs X');
T = nntype.data('format',T,'Targets T');
Xi = nntype.data('format',Xi,'Input states Xi');
Ai = nntype.data('format',Ai,'Layer states Ai');
EW = nntype.nndata_pos('format',EW,'Error weights EW');
% Prepare Data
[net,data,tr,~,err] = nntraining.setup(net,mfilename,X,Xi,Ai,T,EW);
if ~isempty(err), nnerr.throw('Args',err), end
% Train
net = struct(net);
fcns = nn.subfcns(net);
[net,tr] = train_network(net,tr,data,fcns,param);
tr = nntraining.tr_clip(tr);
if isfield(tr,'perf')
    tr.best_perf = tr.perf(tr.best_epoch+1);
end
if isfield(tr,'vperf')
    tr.best_vperf = tr.vperf(tr.best_epoch+1);
end
if isfield(tr,'tperf')
    tr.best_tperf = tr.tperf(tr.best_epoch+1);
end
net.trainFcn = oldTrainFcn;
net.trainParam = oldTrainParam;
out1 = network(net);
out2 = tr;
end

%  BOILERPLATE_END
%% =======================================================

% TODO - MU => MU_START
% TODO - alternate parameter names (i.e. MU for MU_START)

function info = get_info()
info = nnfcnTraining(mfilename,'Levenberg-Marquardt',7.0,true,true,...
    [ ...
    nnetParamInfo('showWindow','Show Training Window Feedback','nntype.bool_scalar',true,...
        'Display training window during training.'), ...
    nnetParamInfo('showCommandLine','Show Command Line Feedback','nntype.bool_scalar',false,...
        'Generate command line output during training.'), ...
    nnetParamInfo('show','Command Line Frequency','nntype.strict_pos_int_inf_scalar',25,...
        'Frequency to update command line.'), ...
    ...
    nnetParamInfo('epochs','Maximum Epochs','nntype.pos_int_scalar',1000,...
        'Maximum number of training iterations before training is stopped.'), ...
    nnetParamInfo('time','Maximum Training Time','nntype.pos_inf_scalar',inf,...
        'Maximum time in seconds before training is stopped.'), ...
    ...
    nnetParamInfo('goal','Performance Goal','nntype.pos_scalar',0,...
        'Performance goal.'), ...
    nnetParamInfo('min_grad','Minimum Gradient','nntype.pos_scalar',1e-5,...
        'Minimum performance gradient before training is stopped.'), ...
    nnetParamInfo('max_fail','Maximum Validation Checks','nntype.strict_pos_int_scalar',6,...
        'Maximum number of validation checks before training is stopped.'), ...
    ...
    nnetParamInfo('mu','Mu','nntype.pos_scalar',0.001,...
        'Mu.'), ...
    nnetParamInfo('mu_dec','Mu Decrease Ratio','nntype.real_0_to_1',0.1,...
        'Ratio to decrease mu.'), ...
    nnetParamInfo('mu_inc','Mu Increase Ratio','nntype.over1',10,...
        'Ratio to increase mu.'), ...
    nnetParamInfo('mu_max','Maximum mu','nntype.strict_pos_scalar',1e10,...
        'Maximum mu before training is stopped.'), ...
    ], ...
    [ ...
    nntraining.state_info('gradient','Gradient','continuous','log') ...
    nntraining.state_info('mu','Mu','continuous','log') ...
    nntraining.state_info('val_fail','Validation Checks','discrete','linear') ...
    ]);
end

function err = check_param(param)
err = '';
end

function [net,tr] = train_network(net,tr,data,fcns,param)
% Checks
if isempty(net.performFcn)
    warning('nnet:trainlm:Performance',nnwarning.empty_performfcn_corrected);
    net.performFcn = 'mse';
    net.performParam = mse('defaultParam');
    tr.performFcn = net.performFcn;
    tr.performParam = net.performParam;
end
if isempty(strmatch(net.performFcn,{'sse','mse'},'exact'))
    warning('nnet:trainlm:Performance',nnwarning.nonjacobian_performfcn_replaced);
    net.performFcn = 'mse';
    net.performParam = mse('defaultParam');
    tr.performFcn = net.performFcn;
    tr.performParam = net.performParam;
end
% Initialize
startTime = clock;
original_net = net;
[perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
[best,val_fail] = nntraining.validation_start(net,perf,vperf);
WB = getwb(net);
lengthWB = length(WB);
ii = sparse(1:lengthWB,1:lengthWB,ones(1,lengthWB));
mu = param.mu;
% Training Record
tr.best_epoch = 0;
tr.goal = param.goal;
tr.states = {'epoch','time','perf','vperf','tperf','mu','gradient','val_fail'};
% Status
status = ...
    [ ...
    nntraining.status('Epoch','iterations','linear','discrete',0,param.epochs,0), ...
    nntraining.status('Time','seconds','linear','discrete',0,param.time,0), ...
    nntraining.status('Performance','','log','continuous',perf,param.goal,perf) ...
    nntraining.status('Gradient','','log','continuous',gradient,param.min_grad,gradient) ...
    nntraining.status('Mu','','log','continuous',mu,param.mu_max,mu) ...
    nntraining.status('Validation Checks','','linear','discrete',0,param.max_fail,0) ...
    ];
nn_train_feedback('start',net,status);
% Train
for epoch = 0:param.epochs
    % Stopping Criteria
    current_time = etime(clock,startTime);
    [userStop,userCancel] = nntraintool('check');
    if userStop, tr.stop = 'User stop.'; net = best.net;
    elseif userCancel, tr.stop = 'User cancel.'; net = original_net;
    elseif (perf <= param.goal), tr.stop = 'Performance goal met.'; net = best.net;
    elseif (epoch == param.epochs), tr.stop = 'Maximum epoch reached.'; net = best.net;
    elseif (current_time >= param.time), tr.stop = 'Maximum time elapsed.'; net = best.net;
    elseif (gradient <= param.min_grad), tr.stop = 'Minimum gradient reached.'; net = best.net;
    elseif (mu >= param.mu_max), tr.stop = 'Maximum MU reached.'; net = best.net;
    elseif (val_fail >= param.max_fail), tr.stop = 'Validation stop.'; net = best.net;
    end
    % Feedback
    tr = nntraining.tr_update(tr,[epoch current_time perf vperf tperf mu gradient val_fail]);
    nn_train_feedback('update',net,status,tr,data, ...
        [epoch,current_time,best.perf,gradient,mu,val_fail]);
    % Stop
    if ~isempty(tr.stop), break, end
    % Levenberg Marquardt
    while (mu <= param.mu_max)
        % CHECK FOR SINGULAR MATRIX
        [msgstr,msgid] = lastwarn;
        lastwarn('MATLAB:nothing','MATLAB:nothing')
        warnstate = warning('off','all');
        dWB = -(jj+ii*mu) \ je;
        [~,msgid1] = lastwarn;
        flag_inv = isequal(msgid1,'MATLAB:nothing');
        if flag_inv, lastwarn(msgstr,msgid); end;
        warning(warnstate)
        WB2 = WB + dWB;
        net2 = setwb(net,WB2);
        perf2 = nntraining.train_perf(net2,data,fcns);
        % TODO - possible speed enhancement
        % - retain intermediate variables for Memory Reduction = 1
        if (perf2 < perf) && flag_inv
            WB = WB2; net = net2;
            mu = max(mu*param.mu_dec,1e-20);
            break
        end
        mu = mu * param.mu_inc;
    end
    % Validation
    [perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
    [best,tr,val_fail] = nntraining.validation(best,tr,val_fail,net,perf,vperf,epoch);
end
end
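The core of the file is the Levenberg-Marquardt step dWB = -(jj + ii*mu) \ je above, where jj and je are the Jacobian products J'J and J'e and mu is adapted by mu_dec/mu_inc each epoch. In practice you never call this file directly; you select the algorithm through the network object. A minimal usage sketch, assuming the Neural Network Toolbox is installed (the toy data is illustrative):

% Fit a small feedforward network to a toy curve with Levenberg-Marquardt.
x = linspace(-1, 1, 100);         % toy inputs
t = sin(2*pi*x);                  % toy targets
net = feedforwardnet(10);         % one hidden layer of 10 neurons
net.trainFcn = 'trainlm';         % select Levenberg-Marquardt (the default)
net.trainParam.epochs = 100;      % cf. the parameter list in the help above
net.trainParam.mu     = 0.001;    % initial mu
[net, tr] = train(net, x, t);     % returns the trained net and record TR
y = net(x);                       % network output after training

traingdm and trainbr follow the same pattern: set net.trainFcn = 'traingdm' or 'trainbr', and their M-files can be read the same way (type edit traingdm at the MATLAB prompt).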

That concludes our answers on PSO algorithms for optimizing BP neural network code. We hope this helps; if you have more related questions, you can also contact our customer service for further explanations.

