A single-machine scheduling problem with preventive periodic maintenance activities in a remanufacturing system containing both resumable and non-resumable jobs is studied. The objective is to find a schedule that minimizes the makespan, and an LPT-LS algorithm is proposed: non-resumable jobs are first scheduled on the machine by the longest processing time (LPT) rule, and resumable jobs are then scheduled by the list scheduling (LS) rule. The worst-case ratio of the algorithm is analyzed in three cases, distinguished by the total processing time of the resumable jobs (denoted S2). When S2 exceeds the spare time left on the machine after the non-resumable jobs are assigned by the LPT rule, the worst-case ratio equals 1; when S2 falls between the spare time under the LPT rule and that under an optimal schedule, it is less than 2; and when S2 is less than the spare time under an optimal schedule, it is also less than 2. Finally, numerical examples are presented for verification.
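To make the LPT-LS idea concrete, the sketch below implements a simplified version under assumed conventions that are not taken from the paper: the machine alternates working intervals of length T with maintenance of length m, a non-resumable job must fit entirely inside one working interval, and resumable work may be split freely across maintenance. Function names and the toy instance are illustrative.

```python
# Minimal sketch of the LPT-LS idea for one machine with periodic maintenance.
def lpt_ls_makespan(non_resumable, resumable, T, m):
    if max(non_resumable, default=0) > T:
        raise ValueError("a non-resumable job cannot exceed the interval length T")

    # Phase 1 (LPT): sort non-resumable jobs longest-first and put each into
    # the first opened working interval that still has enough spare time.
    spare = []                                   # spare time per opened interval
    for p in sorted(non_resumable, reverse=True):
        for i, s in enumerate(spare):
            if p <= s:
                spare[i] -= p
                break
        else:
            spare.append(T - p)

    # Phase 2 (LS): pour the resumable work (total S2) into the spare time of
    # the opened intervals front to back, then spill into fresh intervals.
    s2 = sum(resumable)
    for i in range(len(spare)):
        used = min(spare[i], s2)
        spare[i] -= used
        s2 -= used

    loads = [T - s for s in spare]               # work placed in each interval
    while s2 > 0:                                # spill-over into new intervals
        chunk = min(T, s2)
        loads.append(chunk)
        s2 -= chunk
    if not loads:
        return 0.0

    # Makespan = finish time of the last interval that actually contains work.
    last = max(i for i, l in enumerate(loads) if l > 0)
    return last * (T + m) + loads[last]

# Toy instance: three non-resumable and two resumable jobs.
print(lpt_ls_makespan([4, 3, 3], [5, 2], T=6, m=1))
```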
In this paper we consider a single-machine scheduling model with deteriorating jobs and simultaneous learning, and we introduce polynomial solutions for single-machine makespan minimization, total flow time minimization and maximum lateness minimization corresponding to the first and second special cases of our model under some agreeable conditions. However, corresponding to the third special case of our model, we show that the optimal schedules may be different from those of the classical version for the above objective functions.
In a CPM network, the longest path problem is one of the most important subjects. Based on the intrinsic principles of CPM networks, the length of the paths between any two nodes is derived. Furthermore, the length of the longest path from the start node to an arbitrary node, and from an arbitrary node to the end node, is obtained. For the scheduling of two activities with float in CPM scheduling, we put forward the Barycenter Theory and prove it based on the algorithm for the length of the longest path. This theory tells us which activity should be performed first. Finally, we illustrate the theory with an example.
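The core computation referred to here, the longest path length from the start node to every other node of an acyclic activity-on-arc network, can be done with a topological order and dynamic programming. The sketch below assumes a hypothetical adjacency-list encoding and a toy network; lengths to the end node can be obtained the same way on the reversed graph.

```python
# Illustrative sketch: longest-path lengths in a CPM (activity-on-arc) network.
# The network is a dict {node: [(successor, activity_duration), ...]}.
from collections import deque

def longest_from_start(arcs, start):
    # Kahn-style topological order of the acyclic CPM graph.
    indeg = {u: 0 for u in arcs}
    for u in arcs:
        for v, _ in arcs[u]:
            indeg.setdefault(v, 0)
            indeg[v] += 1
    order, queue = [], deque(u for u in indeg if indeg[u] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in arcs.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    # Dynamic program: dist[v] = length of the longest path start -> v.
    dist = {u: float("-inf") for u in indeg}
    dist[start] = 0
    for u in order:
        if dist[u] == float("-inf"):
            continue
        for v, w in arcs.get(u, []):
            dist[v] = max(dist[v], dist[u] + w)
    return dist

network = {1: [(2, 3), (3, 2)], 2: [(4, 4)], 3: [(4, 6)], 4: []}
print(longest_from_start(network, start=1))   # {1: 0, 2: 3, 3: 2, 4: 8}
```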
Motivated by industrial applications, we study a single-machine scheduling problem in which all the jobs are mutually independent and available at time zero. The machine processes the jobs sequentially and is never idle while a job remains to be processed. The operation of each job cannot be interrupted, and the machine cannot process more than one job at a time. A setup time is needed whenever the machine switches from one type of job to another. The objective is to find a schedule that minimizes the total completion time of the jobs; since the sum of the processing times is a constant, this amounts to minimizing the total setup time. Ant colony optimization (ACO) is a meta-heuristic that has recently been applied to scheduling problems. In this paper we propose an improved ACO algorithm, Branching Ant Colony with Dynamic Perturbation (DPBAC), for this single-machine scheduling problem. DPBAC improves traditional ACO in the following aspects: introducing a branching method to choose starting points; improving the state transition rules; introducing a mutation method to shorten tours; improving the pheromone updating rules; and introducing a conditional dynamic perturbation strategy. Computational results show that the DPBAC algorithm is superior to the traditional ACO algorithm.
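As background for how an ACO constructs job sequences that minimize total setup time, here is a bare-bones sketch. It omits all of DPBAC's additions (branching start points, mutation, conditional dynamic perturbation) and uses made-up parameters and a made-up setup-time matrix.

```python
# Bare-bones ACO for sequencing jobs to minimize total setup time (illustrative).
import random

def aco_sequence(setup, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.1):
    n = len(setup)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on arc (i, j)

    def build_tour():
        start = random.randrange(n)
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            i = tour[-1]
            # State transition: weight = pheromone^alpha * (1/setup)^beta.
            weights = [(j, tau[i][j] ** alpha * (1.0 / (setup[i][j] + 1e-9)) ** beta)
                       for j in unvisited]
            total = sum(w for _, w in weights)
            r, acc = random.random() * total, 0.0
            for j, w in weights:
                acc += w
                if acc >= r:
                    break
            tour.append(j)
            unvisited.remove(j)
        return tour

    def cost(tour):
        return sum(setup[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))

    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for t in (build_tour() for _ in range(n_ants)):
            c = cost(t)
            if c < best_cost:
                best, best_cost = t, c
        # Evaporate, then deposit pheromone along the best tour found so far.
        tau = [[(1 - rho) * tau[i][j] for j in range(n)] for i in range(n)]
        for k in range(len(best) - 1):
            tau[best[k]][best[k + 1]] += 1.0 / best_cost
    return best, best_cost

setups = [[0, 2, 9, 4], [2, 0, 6, 3], [9, 6, 0, 1], [4, 3, 1, 0]]
print(aco_sequence(setups))
```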
In a local search algorithm, one of the most important features is the definition of its neighborhood, which is crucial to the algorithm's performance. In this paper, we present an analysis of neighborhood combination search for solving the single-machine scheduling problem with sequence-dependent setup times and the objective of minimizing total weighted tardiness (SMSWT). First, we propose a new neighborhood structure named Block Swap (B1), which can be considered an extension of the previously widely used Block Move (B2) neighborhood, together with a fast incremental evaluation technique to enhance its evaluation efficiency. Second, based on the Block Swap and Block Move neighborhoods, we present two combined neighborhood structures: neighborhood union (denoted B1∪B2) and token-ring search (denoted B1→B2), both of which combine B1 and B2. Third, we incorporate the neighborhood union and token-ring search into two representative metaheuristics, the Iterated Local Search algorithm (ILS_new) and the Hybrid Evolutionary Algorithm (HEA_new), to investigate their performance. Extensive experiments show the competitiveness of the token-ring combination of the two neighborhoods. Tested on the 120 public benchmark instances, our HEA_new is highly competitive in solution quality and computational time with both exact algorithms and recent metaheuristics. We have also tested HEA_new with the selected neighborhood combination search on the 64 public benchmark instances of the single-machine scheduling problem with sequence-dependent setup times; HEA_new matches the optimal or best known results for all 64 instances. In particular, the computational time for reaching the best known results on five challenging instances is reduced by at least 61.25%.
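The token-ring combination B1→B2 can be illustrated with a small sketch: run local search with one neighborhood to a local optimum, hand the solution to the other neighborhood, and keep alternating until neither improves. The stand-in moves below swap or relocate single jobs rather than blocks, and sequence-dependent setup times are omitted from the cost for brevity, so this is only a schematic of the combination mechanism, not the paper's operators.

```python
# Token-ring combination of two neighborhoods for total weighted tardiness.
import itertools, random

def cost(seq, weights, due, proc):
    t, tw = 0, 0
    for j in seq:                               # total weighted tardiness
        t += proc[j]
        tw += weights[j] * max(0, t - due[j])
    return tw

def best_swap(seq, eval_):                      # stand-in for Block Swap (B1)
    best = seq
    for i, k in itertools.combinations(range(len(seq)), 2):
        cand = list(seq)
        cand[i], cand[k] = cand[k], cand[i]
        if eval_(cand) < eval_(best):
            best = cand
    return best

def best_move(seq, eval_):                      # stand-in for Block Move (B2)
    best = seq
    for i, k in itertools.permutations(range(len(seq)), 2):
        cand = seq[:i] + seq[i + 1:]
        cand.insert(k, seq[i])
        if eval_(cand) < eval_(best):
            best = cand
    return best

def token_ring(seq, eval_, neighborhoods):
    improved = True
    while improved:
        improved = False
        for nb in neighborhoods:                # pass the "token" B1 -> B2 -> ...
            local = seq
            while True:                         # descend to a local optimum in nb
                cand = nb(local, eval_)
                if eval_(cand) < eval_(local):
                    local = cand
                else:
                    break
            if eval_(local) < eval_(seq):
                seq, improved = local, True
    return seq

proc = [3, 5, 2, 6]; due = [4, 6, 5, 12]; w = [2, 1, 3, 1]
ev = lambda s: cost(s, w, due, proc)
start = list(range(4)); random.shuffle(start)
best = token_ring(start, ev, [best_swap, best_move])
print(best, ev(best))
```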
In this paper, we consider single-machine scheduling with step-deteriorating jobs and rejection. Each job is either rejected by paying a rejection penalty, or accepted and processed on the single machine, and the actual processing time of each accepted job is a step function of its starting time and the common deteriorating date. The objective is to minimize the makespan of the accepted jobs plus the total penalty of the rejected jobs. For the case of a common deteriorating penalty, we first show that the problem is NP-hard in the ordinary sense. Then we present two pseudo-polynomial algorithms and a 2-approximation algorithm. Furthermore, we propose a fully polynomial time approximation scheme. For the case of a common normal processing time, we present two pseudo-polynomial time algorithms, a 2-approximation algorithm and a fully polynomial time approximation scheme.
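The objective described above can be made concrete with a small evaluator: an accepted job j has normal time p[j] and pays a deterioration penalty b[j] if it starts after the common deteriorating date D, while a rejected job contributes its rejection penalty e[j]. The data and the simple threshold rule below are illustrative assumptions only, not the paper's algorithms.

```python
# Illustrative evaluation of the step-deteriorating model with rejection.
def objective(accepted_order, rejected, p, b, e, D):
    t = 0
    for j in accepted_order:
        actual = p[j] if t <= D else p[j] + b[j]   # step-deteriorating time
        t += actual
    return t + sum(e[j] for j in rejected)

def threshold_heuristic(p, b, e, D):
    # Toy rule: accept a job only if its rejection penalty exceeds its normal
    # processing time, and run accepted jobs shortest-normal-time first.
    n = len(p)
    accept = [j for j in range(n) if e[j] > p[j]]
    reject = [j for j in range(n) if j not in accept]
    accept.sort(key=lambda j: p[j])
    return objective(accept, reject, p, b, e, D), accept, reject

p = [4, 2, 6, 3]; b = [3, 5, 2, 4]; e = [5, 1, 9, 2]; D = 5
print(threshold_heuristic(p, b, e, D))
```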
Additive manufacturing (AM) has attracted significant attention in recent years based on its wide range of applications and growing demand. AM offers the advantages of production flexibility and design freedom. In this study, we considered a practical variant of the batch-processing-machine (BPM) scheduling problem that arises in AM industries, where an AM machine can process multiple parts simultaneously, as long as the two-dimensional rectangular packing constraint is not violated. Based on the set-partitioning formulation of our mixed-integer programming (MIP) model, a branch-and-price (B&P) algorithm was developed by embedding a column-generation technique into a branch-and-bound framework. Additionally, a novel labelling algorithm was developed to accelerate the column-generation process. Ours is the first study to provide a B&P algorithm to solve the BPM scheduling problem in the AM industry. We tested the performance of our algorithm against a modern MIP solver (Gurobi) using real data from a 3D printing factory. The results demonstrate that for most instances tested, our algorithm produces results similar or identical to those of Gurobi with reasonable computation time, and outperforms Gurobi in terms of solution quality and running time on some large instances.
The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
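The latency-aware, fault-tolerant allocation idea can be sketched very simply: send each task to the reachable node with the lowest latency, and on execution failure retry on the next best node, falling back to the cloud. Node names, latencies, and the failure model below are illustrative assumptions, not the FORD framework itself.

```python
# Minimal latency-aware allocation with retry on failure (illustrative).
import random

NODES = [("fog-1", 5), ("fog-2", 8), ("fog-3", 12), ("cloud", 40)]  # (name, ms)

def execute(node, task):
    # Placeholder execution: fog nodes fail 20% of the time in this toy model.
    return node == "cloud" or random.random() > 0.2

def schedule(task):
    for node, latency in sorted(NODES, key=lambda x: x[1]):
        if execute(node, task):
            return node, latency            # success: record where it ran
    raise RuntimeError("all nodes failed")  # unreachable with a cloud fallback

for t in range(3):
    node, latency = schedule(f"task-{t}")
    print(f"task-{t} completed on {node} ({latency} ms)")
```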
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is the lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP targeting the minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search, by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments are used to verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we have discovered that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well-suited for real-world distributed flow shop scheduling.
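For readers unfamiliar with the iterated greedy skeleton that MLIGA builds on, the sketch below shows the basic destruction-construction-acceptance loop. The memory mechanism, the reinforcement learning of parameters, and the multi-factory assignment of the DPFSP are omitted; this is a single-factory permutation flow shop toy with made-up processing times.

```python
# Skeleton of the iterated greedy (IG) loop: destruction, greedy reinsertion,
# acceptance of non-worsening solutions (illustrative simplification).
import random

def makespan(perm, proc):                     # proc[job][machine]
    m = len(proc[0])
    c = [0.0] * m
    for j in perm:
        c[0] += proc[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + proc[j][k]
    return c[-1]

def greedy_insert(partial, job, proc):
    # Try every insertion position and keep the one with the smallest makespan.
    best, best_c = None, float("inf")
    for pos in range(len(partial) + 1):
        cand = partial[:pos] + [job] + partial[pos:]
        c = makespan(cand, proc)
        if c < best_c:
            best, best_c = cand, c
    return best

def iterated_greedy(proc, d=2, iters=200):
    perm = []
    for j in range(len(proc)):                # insertion-based initial solution
        perm = greedy_insert(perm, j, proc)
    best = list(perm)
    for _ in range(iters):
        removed = random.sample(perm, d)      # destruction
        partial = [j for j in perm if j not in removed]
        for j in removed:                     # construction
            partial = greedy_insert(partial, j, proc)
        if makespan(partial, proc) <= makespan(perm, proc):   # acceptance
            perm = partial
            if makespan(perm, proc) < makespan(best, proc):
                best = list(perm)
    return best, makespan(best, proc)

proc = [[3, 4, 2], [2, 5, 1], [4, 1, 3], [3, 3, 3], [1, 2, 4]]
print(iterated_greedy(proc))
```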
In this paper, a bilevel optimization model of an integrated energy operator (IEO)–load aggregator (LA) is constructed to address the coordinated optimization challenge of a multi-stakeholder island integrated energy system (IIES). The upper level represents the integrated energy operator, and the lower level is the electricity-heat-gas load aggregator. Owing to the benefit conflict between the upper and lower levels of the IIES, a dynamic pricing mechanism for coordinating the interests of the two levels is proposed, which incorporates factors such as the carbon emissions of the IIES and the lower-level load interruption power. In this mechanism, the price of energy sold to the lower-level LA can be dynamically adjusted according to the information on carbon emissions and load interruption power. Mutual benefits and win-win outcomes are achieved between the upper- and lower-level stakeholders. Finally, CPLEX is used to iteratively solve the bilevel optimization model, and the optimal solution is selected according to the joint optimal discrimination mechanism. The simulation results indicate that source-load coordinated operation can reduce the operation costs of both levels. Using the proposed pricing mechanism, the carbon emissions and load interruption power of the IEO-LA are reduced by 9.78% and 70.19%, respectively, and the capture power of the carbon capture equipment is improved by 36.24%. The validity of the proposed model and method is verified.
Ship outfitting is a key process in shipbuilding. Efficient and high-quality ship outfitting is a top priority for modern shipyards. These activities are conducted at different stations of shipyards, and the outfitting plan is one of the crucial issues in shipbuilding. In this paper, production scheduling and material ordering with endogenous uncertainty of the outfitting process are investigated. The uncertain factors in outfitting equipment production are usually decision-related, which leads to difficulties in addressing uncertainties in the outfitting production workshops before production is conducted according to plan. This uncertainty is regarded as endogenous uncertainty and can be treated as non-anticipativity constraints in the model. To address this problem, a stochastic two-stage programming model with endogenous uncertainty is established to optimize the outfitting job scheduling and raw material ordering process. A practical case of the shipyard of China Merchants Heavy Industry Co., Ltd. is used to evaluate the performance of the proposed method. Satisfactory results are achieved at the lowest expected total cost, as the complete kit rate of outfitting equipment is improved and emergency replenishment is reduced.
We study single-machine scheduling problems with a single maintenance activity (MA) of length p0 under three types of assumptions: (A) the MA is required in a fixed time interval [T−p0, T] with T ≥ p0, and job processing is preemptive and resumable; (B) the MA is required in a relaxed time interval [0, T] with T ≥ p0, and job processing is nonpreemptive; (C) the MA is required in a relaxed time interval [T0, T] with 0 ≤ T0 ≤ T−p0, and job processing is nonpreemptive. We show in this paper that, up to the time complexity for solving scheduling problems, assumptions (A) and (B) are equivalent; moreover, if T−(T0+p0) is greater than or equal to the maximum processing time of all jobs, assumption (C) is also equivalent to (A) and (B). As an application, we study scheduling to minimize the weighted number of tardy jobs under each of the three assumptions and present the corresponding time-complexity results.
The single-machine lot scheduling problem with splittable jobs to minimize the number of tardy jobs has been shown to be weakly NP-hard in the literature. In this paper, we show that a generalized version of this problem in which jobs have deadlines is strongly NP-hard, and we also present results for some related scheduling problems.
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms, but efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional MCWOA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicates that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO).
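The Sobol-sequence population initialization mentioned above can be sketched in a few lines: quasi-random points cover the search space more evenly than independent uniform draws. The bounds and population size below are illustrative; the sketch assumes SciPy (version 1.7 or later) for its `scipy.stats.qmc` module.

```python
# Sobol-sequence population initialization (illustrative sketch).
import numpy as np
from scipy.stats import qmc

def sobol_population(pop_size, lower, upper, seed=0):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sampler = qmc.Sobol(d=len(lower), scramble=True, seed=seed)
    unit = sampler.random(pop_size)            # points in [0, 1)^d
    return qmc.scale(unit, lower, upper)       # stretch to the search bounds

pop = sobol_population(8, lower=[0, 0, 0], upper=[10, 5, 1])
print(pop.shape)                               # (8, 3)
print(pop.min(axis=0), pop.max(axis=0))
```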
To mitigate the impact of wind power volatility on power system scheduling, this paper adopts the wind-storage combined unit to improve the dispatchability of wind energy, and a three-level optimal scheduling and power allocation strategy is proposed for the system containing the wind-storage combined unit. The strategy takes smoothing the power output as its main objective. The first level is the wind-storage joint scheduling, and the second and third levels carry out the unit commitment optimization of thermal power and the power allocation of the wind power cluster (WPC), respectively, according to the scheduling power of the WPC and the energy storage system (ESS) obtained from the first level. This ensures the stability, economy, and environmental friendliness of the whole power system. Based on the roles of peak shaving-valley filling and fluctuation smoothing of the ESS, this paper decides the charging and discharging intervals of the ESS, so that the energy storage and wind power output can be further coordinated. Considering the prediction error and the output uncertainty of wind power, the planned scheduling output of the wind farms (WFs) is first optimized on a long timescale, and then a rolling correction optimization of the scheduling output of the WFs is carried out on a short timescale. Finally, the effectiveness of the proposed optimal scheduling and power allocation strategy is verified through case analysis.
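One simple way to picture the peak-shaving / valley-filling logic used to decide the ESS charging and discharging intervals is a quantile rule: charge in hours whose forecast net load falls in the lowest quantile, discharge in the highest. The thresholds and the 24-hour load profile below are illustrative assumptions, not data or rules from the paper.

```python
# Quantile-based sketch of ESS charge/discharge interval selection.
import numpy as np

def ess_intervals(net_load, low_q=0.25, high_q=0.75):
    net_load = np.asarray(net_load, float)
    lo, hi = np.quantile(net_load, [low_q, high_q])
    charge = np.where(net_load <= lo)[0]       # valley hours -> charge
    discharge = np.where(net_load >= hi)[0]    # peak hours   -> discharge
    return charge.tolist(), discharge.tolist()

load = [32, 30, 28, 27, 29, 35, 48, 60, 66, 64, 62, 61,
        58, 57, 59, 63, 70, 78, 82, 76, 65, 52, 42, 36]
charge_h, discharge_h = ess_intervals(load)
print("charge hours:", charge_h)
print("discharge hours:", discharge_h)
```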
With the introduction of the "dual carbon" goal and the continuous promotion of low-carbon development, the integrated energy system (IES) has gradually become an effective way to save energy and reduce emissions. This study proposes a low-carbon economic optimization scheduling model for an IES that considers carbon trading costs. With the goal of minimizing the total operating cost of the IES and considering the transferable and curtailable characteristics of the electric and thermal flexible loads, an optimal scheduling model of the IES that considers the cost of carbon trading and flexible loads on the user side was established. The role of flexible loads in improving the economy of an energy system was investigated using examples, and the rationality and effectiveness of the study were verified through a comparative analysis of different scenarios. The results showed that the total cost of the system in the different scenarios was reduced by 18.04%, 9.1%, 3.35%, and 7.03%, respectively, whereas the total carbon emissions of the system were reduced by 65.28%, 20.63%, 3.85%, and 18.03%, respectively, when the carbon trading cost and demand-side flexible electric and thermal load responses were considered simultaneously. Flexible electrical and thermal loads did not have the same impact on system performance: in the analyzed case, the total cost and carbon emissions of the system when only the flexible electrical load response was considered were lower than those when only the flexible thermal load response was taken into account. Photovoltaics have a surplus of carbon trading credits and can profit from selling them, whereas other devices exceed their carbon allowances and need to buy carbon credits.
Traditional optimal scheduling methods are limited to accurate physical models and parameter settings, which are difficult to adapt to the uncertainty of source and load, and they cannot make dynamic decisions continuously. This paper proposes a dynamic economic scheduling method for distribution networks based on deep reinforcement learning. First, the economic scheduling model of the new energy distribution network is established considering the action characteristics of micro gas turbines, the dynamic scheduling model based on deep reinforcement learning is constructed for a new energy distribution network system with a high proportion of new energy, and the Markov decision process of the model is defined. Second, to handle the changing characteristics of source-load uncertainty, agents are trained interactively with the distribution network in a data-driven manner. Then, through the proximal policy optimization algorithm, the agents adaptively learn the scheduling strategy and realize dynamic scheduling decisions for the new energy distribution network system. Finally, the feasibility and superiority of the proposed method are verified on an improved IEEE 33-node simulation system.
Time-Sensitive Network (TSN) with deterministic transmission capability is increasingly used in many emerging fields. It mainly guarantees the Quality of Service (QoS) of applications with strict requirements on time and security. One of the core features of TSN is traffic scheduling with bounded low delay in the network. However, traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism. To implement incremental scheduling of newly arrived traffic in TSN, we propose a Dynamic Response Incremental Scheduling (DR-IS) method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture. Under the premise of meeting the traffic scheduling requirements, we adopt two modes, traffic shift and traffic exchange, to dynamically adjust the time slot injection position of the traffic in the original scheme, and determine the sending offset time of the new time-sensitive traffic to minimize the global traffic transmission jitter. The evaluation results show that the DR-IS method can effectively control the large increase of traffic transmission jitter in incremental scheduling without affecting the transmission delay, thus realizing dynamic incremental scheduling of time-sensitive traffic in TSN.
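As a toy stand-in for the offset-selection step described above, the sketch below places a new time-sensitive flow in a free slot of one scheduling cycle so that its deviation from an ideal release position (used here as a crude jitter proxy) is minimal. The slot table, cycle length, and selection rule are illustrative assumptions, not TSN or DR-IS specifics.

```python
# Toy offset selection for a new flow within one scheduling cycle.
def pick_offset(occupied, n_slots, ideal_slot):
    free = [s for s in range(n_slots) if s not in occupied]
    if not free:
        return None                            # no capacity left in this cycle
    # Choose the free slot closest to the ideal release position.
    return min(free, key=lambda s: abs(s - ideal_slot))

cycle_slots = 16
occupied = {0, 1, 4, 5, 6, 10, 11}
for ideal in (5, 12):
    print(f"ideal slot {ideal} -> send offset slot {pick_offset(occupied, cycle_slots, ideal)}")
```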
The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. General DFJSP research only considers machine constraints and ignores worker constraints. As one critical factor of production, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to increasingly serious environmental issues. Therefore, the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC) for minimizing makespan and total energy consumption is studied in this paper. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain the scheduling schemes. To further enhance the local search capability and expand the solution space, two wolf predation strategies and three critical factory neighborhood structures based on Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms on 45 instances. The results reveal that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
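The Q-learning component that selects among neighborhood structures can be sketched with a tabular agent: it picks one of the critical-factory neighborhoods (actions), receives the resulting cost improvement as reward, and updates its Q-table. The single "state", the action names, and the toy move below are illustrative placeholders, not the Q-MOGWO components themselves.

```python
# Tabular Q-learning for choosing among neighborhood structures (illustrative).
import random

ACTIONS = ["swap_in_critical_factory", "insert_in_critical_factory", "cross_factory_move"]

def q_learning_selector(apply_move, init_cost, episodes=100,
                        alpha=0.3, gamma=0.9, eps=0.2):
    q = {a: 0.0 for a in ACTIONS}              # single-state Q-table
    cost = init_cost
    for _ in range(episodes):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(q, key=q.get))           # epsilon-greedy action choice
        new_cost = apply_move(a, cost)
        reward = cost - new_cost               # positive if the move improved
        q[a] += alpha * (reward + gamma * max(q.values()) - q[a])
        cost = min(cost, new_cost)             # keep only improving solutions
    return q, cost

def toy_move(action, cost):
    # Stand-in for applying a neighborhood move; some actions help more often.
    gain = {"swap_in_critical_factory": 2, "insert_in_critical_factory": 3,
            "cross_factory_move": 1}[action]
    return max(0, cost - random.randint(0, gain))

print(q_learning_selector(toy_move, init_cost=100))
```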
In recent years, target tracking has been considered one of the most important applications of wireless sensor networks (WSN). Optimizing target tracking performance and prolonging network lifetime are two equally critical objectives in this scenario, and existing mechanisms still have weaknesses in balancing the two demands. The proposed heuristic multi-node collaborative scheduling mechanism (HMNCS) comprises cluster head (CH) election, pre-selection, and task set selection mechanisms, where the latter two selections form a two-layer selection mechanism. The CH election innovatively introduces the movement trend of the target and establishes a scoring mechanism to determine the optimal CH, which can delay the CH rotation and thus reduce energy consumption. The pre-selection mechanism adaptively filters out suitable nodes as the candidate task set to apply for tracking tasks, which can reduce the application consumption and the overhead of the following task set selection. Finally, the task node selection is mathematically transformed into an optimization problem, and a genetic algorithm is adopted to form the final task set in the task set selection mechanism. Simulation results show that HMNCS outperforms other compared mechanisms in tracking accuracy and network lifetime.
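A scoring rule of the kind used for CH election can be illustrated as follows: each candidate node is scored from its residual energy, its current distance to the target, and how well the target's movement trend keeps it inside the node's sensing range. The weights and the linear form are assumptions for illustration, not the HMNCS formula.

```python
# Illustrative cluster-head scoring from energy, distance, and movement trend.
import math

def ch_score(node_pos, energy, target_pos, target_vel,
             horizon=5.0, w_energy=0.5, w_dist=0.3, w_trend=0.2, sense_range=30.0):
    dist_now = math.dist(node_pos, target_pos)
    # Predicted target position after `horizon` seconds, from its velocity.
    future = (target_pos[0] + target_vel[0] * horizon,
              target_pos[1] + target_vel[1] * horizon)
    dist_future = math.dist(node_pos, future)
    # Each term is normalized to [0, 1]; closer and better-covered is better.
    return (w_energy * energy
            + w_dist * max(0.0, 1.0 - dist_now / sense_range)
            + w_trend * max(0.0, 1.0 - dist_future / sense_range))

candidates = {"n1": ((10, 10), 0.9), "n2": ((18, 4), 0.6), "n3": ((2, 25), 0.8)}
target, velocity = (12, 8), (1.5, -0.5)
best = max(candidates, key=lambda n: ch_score(candidates[n][0], candidates[n][1],
                                              target, velocity))
print("elected cluster head:", best)
```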