SwePub
Search the SwePub database


Results list for the search "L773:0254 5330 OR L773:1572 9338"


  • Result 1-50 of 71
1.
  • Aas, Erik (author)
  • Limit points of the iterative scaling procedure
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 215:1, s. 15-23
  • Journal article (peer-reviewed)abstract
    • The iterative scaling procedure (ISP) is an algorithm which computes a sequence of matrices, starting from some given matrix. The objective is to find a matrix 'proportional' to the given matrix, having given row and column sums. In many cases, for example if the initial matrix is strictly positive, the sequence is convergent. It is known that the sequence has at most two limit points. When these are distinct, convergence to these two points can be slow. We give an efficient algorithm which finds the limit points, invoking the ISP only on subproblems for which the procedure is convergent.
  •  
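The basic iteration the abstract refers to can be sketched in a few lines (a minimal Sinkhorn-style illustration for a strictly positive matrix, where the sequence is convergent; the start matrix and iteration count are arbitrary choices, not from the paper):

```python
import numpy as np

def iterative_scaling(A, row_sums, col_sums, iters=1000):
    """Iterative scaling procedure: alternately rescale the rows and
    columns of A towards the target row and column sums."""
    X = A.astype(float).copy()
    for _ in range(iters):
        X *= (row_sums / X.sum(axis=1))[:, None]  # match row sums
        X *= (col_sums / X.sum(axis=0))[None, :]  # match column sums
    return X

# strictly positive start matrix, so the sequence has a single limit
A = np.array([[1.0, 2.0], [3.0, 4.0]])
X = iterative_scaling(A, row_sums=np.array([1.0, 1.0]),
                      col_sums=np.array([1.0, 1.0]))
```

For matrices with zero entries the sequence can instead oscillate between two limit points, which is the case the paper's algorithm addresses.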
2.
  • Agrell, Per J., et al. (author)
  • Impacts on efficiency of merging the Swedish district courts
  • 2020
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 288:2, s. 653-679
  • Journal article (peer-reviewed)abstract
    • Judicial courts form a stringent example of public services using partially sticky inputs and outputs with heterogeneous quality. Nevertheless, governments internationally are striving to improve the efficiency of, and diminish the budget spent on, court systems. Frontier methods such as data envelopment analysis are sometimes used in investigations of structural changes in the form of mergers. This essay reviews the methods used to evaluate the ex post efficiency of horizontal mergers. Identification of impacts is difficult; therefore, three analytical frameworks are applied: (1) a technical efficiency comparison over time, (2) a metafrontier approach among mergers and non-mergers, and (3) a conditional difference-in-differences approach where non-merged twins of the actual mergers are identified by matching. In addition, both time heterogeneity and sources of efficiency change are examined ex post. The method is applied to evaluate the impact on efficiency of merging the Swedish district courts from 95 to 48 between 2000 and 2009. Whereas the stated ambition for the mergers was to improve efficiency, no structured ex post analysis had been done. Swedish courts are shown to improve efficiency from merging. In addition to the particular application, this work may inform a more general discussion on public service efficiency measurement under structural change, and its limits and potential.
  •  
3.
  • Akyildirim, Erdinc, et al. (author)
  • Forecasting mid-price movement of Bitcoin futures using machine learning
  • 2021
  • In: Annals of Operations Research. - : SPRINGER. - 0254-5330 .- 1572-9338.
  • Journal article (peer-reviewed)abstract
    • In the aftermath of the global financial crisis and ongoing COVID-19 pandemic, investors face challenges in understanding price dynamics across assets. This paper explores the performance of various types of machine learning algorithms (MLAs) in predicting mid-price movement for Bitcoin futures prices. We use high-frequency intraday data to evaluate relative forecasting performance across various time frequencies, ranging from 5 to 60 minutes. Our findings show that the average classification accuracy for five out of the six MLAs is consistently above the 50% threshold, indicating that MLAs outperform benchmark models such as ARIMA and random walk in forecasting Bitcoin futures prices. This highlights the importance and relevance of MLAs in producing accurate forecasts for Bitcoin futures prices during the COVID-19 turmoil.
  •  
4.
  •  
5.
  • Baltas, Konstantinos, et al. (author)
  • The role of resource orchestration in humanitarian operations : a COVID-19 case in the US healthcare
  • 2022
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338.
  • Journal article (peer-reviewed)abstract
    • This paper investigates the role of resource allocation in alleviating the impact of disruptions in healthcare operations. We draw on resource orchestration theory and analyse data stemming from US healthcare to discuss how the US healthcare system structured, bundled and reconfigured resources (i.e. number of hospital beds, and vaccines) during the COVID-19 pandemic. Following a comprehensive and robust econometric analysis of two key resources (i.e. hospital beds and vaccines), we discuss their effect on the outcomes of the pandemic, measured in terms of confirmed cases and deaths, and draw insights on how the learning curve effect and other factors might influence the efficient and effective control of pandemic outcomes through resource usage. Our contribution lies in revealing how different resources are orchestrated (structured, bundled, and leveraged) to help plan responses to, and deal with, disruptions to create resilient humanitarian operations. Managerial implications, limitations and future research directions are also discussed.
  •  
6.
  • Beldiceanu, Nicolas, et al. (author)
  • New Filtering for the Cumulative Constraint in the Context of Non-Overlapping Rectangles
  • 2011. - 5
  • In: Annals of Operations Research. - : Springer-Verlag. - 0254-5330 .- 1572-9338. ; 1, s. 27-50
  • Journal article (peer-reviewed)abstract
    • This article describes new filtering methods for the Cumulative constraint. The first method introduces bounds for the so called longest cumulative hole problem and shows how to use these bounds in the context of the Non-Overlapping constraint. The second method introduces balancing knapsack constraints which relate the total height of the tasks that end at a specific time-point with the total height of the tasks that start at the same time-point. Experiments on tight rectangle packing problems show that these methods drastically reduce both the time and the number of backtracks for finding all solutions as well as for finding the first solution. For example, we found without backtracking all solutions to 65 perfect square instances of order 22-25 and sizes ranging from 192x192 to 661x661.
  •  
7.
  • Blomvall, Jörgen, 1974-, et al. (author)
  • Corporate Hedging: an answer to the "how" question
  • 2018
  • In: Annals of Operations Research. - New York, United States : Springer-Verlag New York. - 0254-5330 .- 1572-9338. ; 266:1-2, s. 35-69
  • Journal article (peer-reviewed)abstract
    • We develop a stochastic programming framework for hedging currency and interest rate risk, with market traded currency forward contracts and interest rate swaps, in an environment with uncertain cash flows. The framework captures the skewness and kurtosis in exchange rates, transaction costs, the systematic risks in interest rates, and most importantly, the term premia which determine the expected cost of different hedging instruments. Given three commonly used objective functions: variance, expected shortfall, and mean log profits, we study properties of the optimal hedge. We find that the choice of objective function can have a substantial effect on the resulting hedge in terms of the portfolio composition, the resulting risk and the hedging cost. Further, we find that unless the objective is indifferent to hedging costs, term premia in the different markets, along with transaction costs, are fundamental determinants of the optimal hedge. Our results also show that to reduce risk properly and to keep hedging costs low, a rich-enough universe of hedging instruments is critical. Through out-of-sample testing we validate the findings of the in-sample analysis, and importantly, we show that the model is robust enough to be used on real market data. The proposed framework offers great flexibility regarding the distributional assumptions of the underlying risk factors and the types of hedging instruments which can be included in the optimization model.
  •  
8.
  • Bodnar, Taras, et al. (author)
  • A closed-form solution of the multi-period portfolio choice problem for a quadratic utility function
  • 2015
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 229:1, s. 121-158
  • Journal article (peer-reviewed)abstract
    • In the present paper, we derive a closed-form solution of the multi-period portfolio choice problem for a quadratic utility function with and without a riskless asset. All results are derived under weak conditions on the asset returns. No assumption on the correlation structure between different time points is needed and no assumption on the distribution is imposed. All expressions are presented in terms of the conditional mean vectors and the conditional covariance matrices. If the multivariate process of the asset returns is independent, it is shown that in the case without a riskless asset the solution is presented as a sequence of optimal portfolio weights obtained by solving the single-period Markowitz optimization problem. The process dynamics are included only in the shape parameter of the utility function. If a riskless asset is present, then the multi-period optimal portfolio weights are proportional to the single-period solutions multiplied by time-varying constants which are dependent on the process dynamics. Remarkably, in the case of a portfolio selection with the tangency portfolio the multi-period solution coincides with the sequence of the single-period solutions. Finally, we compare the suggested strategies with existing multi-period portfolio allocation methods on real data.
  •  
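The single-period Markowitz building block that the multi-period weights are shown to reduce to can be sketched as follows (a toy illustration with made-up numbers; with a riskless asset, the tangency weights are proportional to Σ⁻¹(μ − r_f·1)):

```python
import numpy as np

def tangency_weights(mu, Sigma, rf):
    """Single-period tangency portfolio: weights proportional to
    Sigma^{-1} (mu - rf), normalized to sum to one."""
    raw = np.linalg.solve(Sigma, mu - rf)
    return raw / raw.sum()

# hypothetical two-asset example
mu = np.array([0.08, 0.12])            # expected returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])       # return covariance matrix
w = tangency_weights(mu, Sigma, rf=0.02)
```

Per the abstract, the multi-period solution with a riskless asset is this single-period weight vector scaled by time-varying constants that depend on the process dynamics.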
9.
  • Bohlin, Markus, et al. (author)
  • Maintenance optimization with duration-dependent costs
  • 2015
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 224:1, s. 1-23
  • Journal article (peer-reviewed)abstract
    • High levels of availability and reliability are essential in many industries where production is subject to high costs due to downtime. Examples include the mechanical drive in natural gas pipelines and power generation on oil platforms, where gas turbines are commonly used as a power source. To mitigate the effects of service outages and increase overall reliability, it is also possible to use one or more redundant units serving as cold standby backup units. In this paper, we consider preventive maintenance optimization for parallel k-out-of-n multi-unit systems, where production at a reduced level is possible when some of the units are still operational. In such systems, there are both positive and negative effects of grouping activities together. The positive effects come from parallel execution of maintenance activities and shared setup costs, while the negative effects come from the limited number of units which can be maintained at the same time. To show the possible economic effects, we evaluate the approach on models of two production environments under a no-fault assumption. We conclude that savings were substantial in our experiments on preventive maintenance, compared to a traditional preventive maintenance plan. For single-unit systems, costs were on average 39 % lower when using optimization. For multi-unit systems, average savings were 19 %. We also used the optimization models to evaluate the effects of re-planning at breakdown and effects due to modeling of inclusion relations. Breakdown re-planning saved between 0 and 11 % of the maintenance costs, depending on which component failed, while inclusion relation modeling resulted in a 7 % average cost reduction.
  •  
10.
  • Borst, Sem, et al. (author)
  • Interacting queues with server selection and coordinated scheduling - Application to cellular data networks
  • 2009
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 170:1, s. 59-78
  • Journal article (peer-reviewed)abstract
    • We consider a system of parallel servers handling users of various classes, whose service rates depend not only on user classes, but also on the set of active servers. We investigate the stability under two types of allocation strategies: (i) server assignment where the users are assigned to servers based on rates, load, and other considerations, and (ii) coordinated scheduling where the activity states of servers are coordinated. We show how the model may be applied to evaluate the downlink capacity of wireless data networks. Specifically, we examine the potential gains in wireless capacity from the two types of resource allocation strategies.
  •  
11.
  • Broström, Peter, et al. (author)
  • Multiobjective design of survivable IP networks
  • 2006
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 147:1, s. 235-253
  • Journal article (peer-reviewed)abstract
    • Modern communication networks often use Internet Protocol routing and the intra-domain protocol OSPF (Open Shortest Path First). The routers in such a network calculate the shortest path to each destination and send the traffic on these paths, using load balancing. The issue of survivability, i.e. the question of how much traffic the network will be able to accommodate if components fail, is increasingly important. We consider the problem of designing a survivable IP network, which also requires determining the routing of the traffic. This is done by choosing the weights used for the shortest path calculations.
  •  
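The per-router computation described above is shortest-path routing on the administered link weights; a minimal Dijkstra sketch on a hypothetical three-node network (load balancing over equal-cost paths is omitted):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src under the given link weights,
    as each OSPF router computes them. adj[u] = list of (v, weight)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# hypothetical network: the a-c traffic is routed via b because the
# weights make the two-hop path cheaper than the direct link
adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
dist = dijkstra(adj, "a")
```

Changing a single weight re-routes traffic, which is why the design problem amounts to choosing the weight vector.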
12.
  • Burdakov, Oleg, et al. (author)
  • A limited-memory multipoint symmetric secant method for bound constrained optimization
  • 2002
  • In: Annals of Operations Research. - 0254-5330 .- 1572-9338. ; 117:1-4, s. 51-70
  • Journal article (peer-reviewed)abstract
    • A new algorithm for solving smooth large-scale minimization problems with bound constraints is introduced. The way of dealing with active constraints is similar to the one used in some recently introduced quadratic solvers. A limited-memory multipoint symmetric secant method for approximating the Hessian is presented. Positive-definiteness of the Hessian approximation is not enforced. A combination of trust-region and conjugate-gradient approaches is used to explore a useful negative curvature information. Global convergence is proved for a general model algorithm. Results of numerical experiments are presented.
  •  
13.
  • Burdakov, Oleg, 1953-, et al. (author)
  • Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles
  • 2017
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 249:1, s. 163-174
  • Journal article (peer-reviewed)abstract
    • Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality.
  •  
14.
  • Carling, Kenneth, et al. (author)
  • Distance measure and the p-median problem in rural areas
  • 2015
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 226:1, s. 89-99
  • Journal article (peer-reviewed)abstract
    • The p-median model is used to locate P facilities to serve a geographically distributed population. Conventionally, it is assumed that the population patronize the nearest facility and that the distance between the resident and the facility may be measured by the Euclidean distance. Carling, Han, and Håkansson (2012) compared two network distances with the Euclidean in a rural region with a sparse, heterogeneous network and a non-symmetric distribution of the population. For a coarse network and small P, they found, in contrast to the literature, the Euclidean distance to be problematic. In this paper we extend their work by use of a refined network and systematically study the case when P is of varying size (1-100 facilities). We find that the network distance gives as good a solution as the travel-time network. The Euclidean distance gives solutions some 4-10 per cent worse than the network distances, and the solutions tend to deteriorate with increasing P. Our conclusions extend to intra-urban location problems.
  •  
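The p-median objective that the distance measures are compared on can be illustrated with a simple greedy heuristic (a toy sketch with hypothetical one-dimensional locations, not the solution method or data of the study):

```python
import numpy as np

def p_median_greedy(dist, p):
    """Greedy heuristic for the p-median model: repeatedly open the
    facility that most reduces total demand-to-nearest-facility distance.
    dist[i, j] = distance from demand point i to candidate site j."""
    chosen, best = [], np.full(dist.shape[0], np.inf)
    for _ in range(p):
        total, j = min((np.minimum(best, dist[:, j]).sum(), j)
                       for j in range(dist.shape[1]) if j not in chosen)
        chosen.append(j)
        best = np.minimum(best, dist[:, j])     # new nearest distances
    return chosen, best.sum()

# hypothetical demand points on a line; candidate sites co-located
positions = np.array([0.0, 1.0, 2.0, 10.0])
dist = np.abs(positions[:, None] - positions[None, :])
chosen, cost = p_median_greedy(dist, p=2)
```

Swapping the distance matrix (Euclidean vs. network vs. travel time) while keeping the heuristic fixed is exactly the kind of comparison the abstract describes.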
15.
  • Carling, Kenneth, et al. (author)
  • Does Euclidean distance work well when the p-median model is applied in rural areas?
  • 2012
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 201:1, s. 83-97
  • Journal article (peer-reviewed)abstract
    • The p-median model is used to locate P centers to serve a geographically distributed population. A cornerstone of such a model is the measure of distance between a service center and demand points, i.e. the location of the population (customers, pupils, patients, and so on). Evidence supports the current practice of using Euclidean distance. However, we find that the location of multiple hospitals in a rural region of Sweden with a non-symmetrically distributed population is quite sensitive to the distance measure, and somewhat sensitive to spatial aggregation of demand points.
  •  
16.
  • Carlsson, Fredrik, et al. (author)
  • A conjugate-gradient based approach for approximate solutions of quadratic programs
  • In: Annals of Operations Research. - 0254-5330 .- 1572-9338.
  • Journal article (peer-reviewed)abstract
    • This paper deals with numerical behaviour and convergence properties of a recently presented column generation approach for optimization of so called step-and-shoot radiotherapy treatment plans. The approach and variants of it have been reported to be efficient in practice, finding near-optimal solutions by generating only a low number of columns. The impact of different restrictions on the columns in a column generation method is studied, and numerical results are given for quadratic programs corresponding to three patient cases. In particular, it is noted that with a bound on the two-norm of the columns, the method is equivalent to the conjugate-gradient method. Further, the above-mentioned column generation approach for radiotherapy is obtained by employing a restriction based on the infinity-norm and non-negativity. The column generation method has weak convergence properties if restricted to generating feasible step-and-shoot plans, with a "tailing-off" effect for the objective values. However, the numerical results demonstrate that, like the conjugate-gradient method, a rapid decrease of the objective value is obtained in the first few iterations. For the three patient cases, the restriction on the columns to generate feasible step-and-shoot plans has small effect on the numerical efficiency.
  •  
17.
  • Carlsson, Fredrik, et al. (author)
  • On column generation approaches for approximate solutions of quadratic programs in intensity-modulated radiation therapy
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 223:1, s. 471-481
  • Journal article (peer-reviewed)abstract
    • This paper deals with numerical behavior of a recently presented column generation approach for optimization of so called step-and-shoot radiotherapy treatment plans. The approach and variants of it have been reported to be efficient in practice, finding near-optimal solutions by generating only a low number of columns. The impact of different restrictions on the columns in a column generation method is studied, and numerical results are given for quadratic programs corresponding to three patient cases. In particular, it is noted that with a bound on the two-norm of the columns, the method is equivalent to the conjugate-gradient method. Further, the above-mentioned column generation approach for radiotherapy is obtained by employing a restriction based on the infinity-norm and non-negativity. The column generation method has weak convergence properties if restricted to generating feasible step-and-shoot plans, with a "tailing-off" effect for the objective values. However, the numerical results demonstrate that, like the conjugate-gradient method, a rapid decrease of the objective value is obtained in the first few iterations. For the three patient cases, the restriction on the columns to generate feasible step-and-shoot plans has small effect on the numerical efficiency.
  •  
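The conjugate-gradient method that the two-norm-restricted column generation is noted to be equivalent to can be sketched as follows (plain textbook CG for an unconstrained convex quadratic, not the authors' radiotherapy code):

```python
import numpy as np

def conjugate_gradient(A, b, iters):
    """Textbook conjugate-gradient for min 0.5 x'Ax - b'x, A s.p.d."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # search direction
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)             # exact line search
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, iters=2)   # n steps solve an n x n system
```

The rapid objective decrease in the first few iterations noted in the abstract is the familiar CG behavior: most of the progress happens along the leading eigendirections early on.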
18.
  • Carlsson, Fredrik, et al. (author)
  • Using eigenstructure of the Hessian to reduce the dimension of the intensity modulated radiation therapy optimization problem
  • 2006
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 148:1, s. 81-94
  • Journal article (peer-reviewed)abstract
    • Optimization is of vital importance when performing intensity modulated radiation therapy to treat cancer tumors. The optimization problem is typically large-scale with a nonlinear objective function and bounds on the variables, and we solve it using a quasi-Newton sequential quadratic programming method. This study investigates the effect on the optimal solution, and hence treatment outcome, when solving an approximate optimization problem of lower dimension. Through a spectral decomposition, eigenvectors and eigenvalues of an approximation to the Hessian are computed. An approximate optimization problem of reduced dimension is formulated by introducing eigenvector weights as optimization parameters, where only eigenvectors corresponding to large eigenvalues are included. The approach is evaluated on a clinical prostate case. Compared to bixel weight optimization, eigenvector weight optimization with few parameters results in a faster initial decline in the objective function, but an inferior final solution. Another approach, which combines eigenvector weights and bixel weights as variables, gives lower final objective values than bixel weight optimization does. However, this advantage comes at the expense of the pre-computational time for the spectral decomposition.
  •  
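The dimension-reduction step described above (keep only eigenvectors with large eigenvalues and optimize over their weights) can be sketched as follows, using a hypothetical diagonal Hessian approximation:

```python
import numpy as np

def top_eigenvector_basis(H, k):
    """Spectral decomposition of a symmetric Hessian approximation;
    keep the k eigenvectors with the largest eigenvalues as the
    reduced set of optimization directions."""
    vals, vecs = np.linalg.eigh(H)              # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]  # n x k basis

# hypothetical Hessian approximation with two dominant directions
H = np.diag([10.0, 5.0, 0.1, 0.01])
B = top_eigenvector_basis(H, k=2)
```

Optimizing over weights `w` in `x = B @ w` then replaces the full-dimensional bixel-weight problem with a k-dimensional one.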
19.
  • Choi, Yongrok, et al. (author)
  • Optimizing enterprise risk management : a literature review and critical analysis of the work of Wu and Olson
  • 2016
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 237:1-2, s. 281-300
  • Journal article (peer-reviewed)abstract
    • Risks exist in all aspects of our lives. Using data in both Scopus and ISI Web of Science, this review paper identifies pioneering work and pioneering scholars in enterprise risk management (ERM). Ranked first on the basis of the review data, Desheng Wu has been active in this area, serving as an effective academic network manager in the global research network. His global efforts with diverse networking have enabled him to publish outstanding papers in the field of ERM. Therefore, this paper also conducts a literature review of his papers and a critical analysis of the work of Wu and Olson from the perspective of ERM, to glean implications and suggestions for the optimization and customization of ERM.
  •  
20.
  • Chronopoulos, Michail, et al. (author)
  • When is it better to wait for a new version? Optimal replacement of an emerging technology under uncertainty
  • 2015
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 235:1, s. 177-201
  • Journal article (peer-reviewed)abstract
    • Firms that use an emerging technology often face uncertainty in both the arrival of new versions and the revenue that may be earned from their deployment. Via a sequential decision-making framework, we determine the value of the investment opportunity and the optimal replacement rule under three different strategies: compulsive, laggard, and leapfrog. In the first one, a firm invests sequentially in every version that becomes available, whereas in the second and third ones, it first waits for a new version to arrive and then either invests in the older or the newer version, respectively. We show that, under a compulsive strategy, technological uncertainty has a non-monotonic impact on the optimal investment decision. In fact, uncertainty regarding the availability of future versions may actually hasten investment. By comparing the relative values of the three strategies, we find that, under a low output price the compulsive strategy always dominates, whereas, at a high output price, the incentive to wait for a new version and adopt either a leapfrog or a laggard strategy increases as the rate of innovation increases. By contrast, high price uncertainty mitigates this effect, thereby increasing the relative attraction of a compulsive strategy.
  •  
21.
  • Cong, Rong Gang, et al. (author)
  • Managing soil natural capital : a prudent strategy for adapting to future risks
  • 2017
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 255:1-2, s. 439-463
  • Journal article (peer-reviewed)abstract
    • Farmers are exposed to substantial weather- and market-related risks. Rational farmers seek to avoid large losses. Future climate change and energy price fluctuations therefore make adapting to increased risks particularly important for them. Managing soil natural capital—the capacity of the soil to generate ecosystem services of benefit to farmers—has been proven to generate a double dividend: increasing farm profit and reducing the associated risk. In this paper we explore whether managing soil natural capital has a third dividend: reducing the downside risk (increasing the positive skewness of profit). We refer to this as the prudence effect, which can be viewed as an adaptation strategy for dealing with future uncertainties through more prudent management of soil natural capital. We do this by developing a dynamic stochastic portfolio model to optimize the stock of soil natural capital—as indicated by soil organic carbon (SOC) content—that considers the mean, variance and skewness of profits from arable farming. The SOC state variable can be managed by the farmer only indirectly, through the spatial and temporal allocation of land use. We model four cash crops and a grass ley that generates no market return but replenishes SOC. We find that managing soil natural capital can not only improve farm profit while reducing risk, but also reduce the downside risk. Prudent adaptation to future risks should therefore consider the impact of current agricultural management practices on the stock of soil natural capital.
  •  
22.
  • Dam, Hai Huyen, et al. (author)
  • Spreading Code Design Using a Global Optimization Method
  • 2005
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 133:1-4, s. 249-264
  • Journal article (peer-reviewed)abstract
    • The performance of a code division multiple access system depends on the correlation properties of the employed spreading code. Low cross-correlation values between spreading sequences are desired to suppress multiple access interference and to improve bit error performance. An auto-correlation function with a distinct peak enables proper synchronization and suppresses intersymbol interference. However, these requirements contradict each other and a trade-off needs to be established. In this paper, a global two-dimensional optimization method is proposed to minimize the out-of-phase average mean-square aperiodic auto-correlation, with the average mean-square aperiodic cross-correlation being allowed to lie within a fixed region. This approach is applied to design sets of complex spreading sequences. A design example is presented to illustrate the relation between various correlation characteristics. The correlations of the obtained sets are compared with correlations of other known sequences.
  •  
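The aperiodic auto-correlation that the design trades off against cross-correlation can be computed directly; a small sketch using the Barker-7 sequence (a classic binary example with low out-of-phase auto-correlation, not one of the paper's complex sequence sets):

```python
import numpy as np

def aperiodic_autocorrelation(seq):
    """Aperiodic auto-correlation C(k) = sum_n s[n] * conj(s[n+k]),
    summed over the overlapping part only (no wrap-around)."""
    s = np.asarray(seq, dtype=complex)
    N = len(s)
    return np.array([np.sum(s[:N - k] * np.conj(s[k:])) for k in range(N)])

# Barker-7: distinct peak C(0) = 7, all off-peak values in {-1, 0}
barker7 = [1, 1, 1, -1, -1, 1, -1]
C = aperiodic_autocorrelation(barker7)
```

The "distinct peak" requirement in the abstract corresponds to C(0) dominating the out-of-phase values |C(k)|, k ≠ 0.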
23.
  • Eklund, Patrik, 1958-, et al. (author)
  • A consensus model of political decision-making
  • 2008
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 158:1, s. 5-20
  • Journal article (other academic/artistic)abstract
    • In this paper, a model of political consensus is introduced. Parties try to reach consensus in forming a government. A government is defined as a pair consisting of a winning coalition and a policy supported by this coalition, where a policy consists of policies on given issues. A party evaluates all governments the party belongs to with respect to some criteria. We allow the criteria to be of unequal importance to a party. These criteria concern winning coalitions and policy issues. Parties may be advised to adjust their preferences, i.e., to change their evaluation concerning some government(s) or/and the importance of the criteria, in order to obtain a better political consensus.
  •  
24.
  • Engevall, Stefan, 1966-, et al. (author)
  • The traveling salesman game : An application of cost allocation in a gas and oil company
  • 1998
  • In: Annals of Operations Research. - : Springer-Verlag New York. - 0254-5330 .- 1572-9338. ; 82, s. 203-218
  • Journal article (peer-reviewed)abstract
    • In this article, a cost allocation problem that arises in a distribution planning situation at the Logistics Department at Norsk Hydro Olje AB is studied. A specific tour is considered, for which the total distribution cost is to be divided among the customers that are visited. This problem is formulated as a traveling salesman game, and cost allocation methods based on different concepts from cooperative game theory, such as the nucleolus, the Shapley value and the τ-value, are discussed. Additionally, a new concept is introduced: the demand nucleolus. Computational results for the Norsk Hydro case are presented and discussed.
  •  
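The kind of cost allocation discussed above can be illustrated with the Shapley value on a tiny two-customer game (a toy sketch with made-up coalition costs, not the Norsk Hydro data; the nucleolus and τ-value require solving linear programs and are omitted):

```python
from itertools import permutations
from math import factorial

def shapley(players, cost):
    """Shapley value of a cooperative cost game: each player's average
    marginal cost over all n! join orders. `cost` maps frozensets of
    players to coalition costs (cost of the empty set must be 0)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += cost[coalition | {p}] - cost[coalition]
            coalition |= {p}
    n_fact = factorial(len(players))
    return {p: v / n_fact for p, v in phi.items()}

# hypothetical tour costs: serving a alone, b alone, and both together
cost = {frozenset(): 0, frozenset({"a"}): 10,
        frozenset({"b"}): 12, frozenset({"a", "b"}): 16}
phi = shapley(["a", "b"], cost)
```

By construction the allocations sum to the grand-coalition cost, so the whole tour cost is divided among the customers.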
25.
  • Eriksson, Ola (author)
  • Influence of temporal aggregation on strategic forest management under risk of wind damage
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 219, s. 397-314
  • Journal article (peer-reviewed)abstract
    • A key aspect when optimizing strategic and long-term forest management policies is the temporal aggregation utilizing time periods of a specific length. As the length of the time periods influences both the problem size and the possible interaction of the management policy with the state of the forest, it implicitly has a major influence on the feasibility of computing the optimal management policy and the quality of the resulting management policy. The objective of this study was twofold: (i) to evaluate the value of considering the risk of wind damage in large-scale strategic forest management policies, and (ii) to investigate the influence of the length of the time periods on the value of considering the risk of wind damage in the management policy. The analysis was executed utilizing a graph-based Markov decision process model capable of considering stochastic wind damage events, and a case study utilizing a forest estate consisting of 1200 ha of forest, divided into 623 stands. Twenty-, ten-, and five-year-long time periods were utilized to evaluate the influence of the length of the time periods, while the value of considering the risk of wind damage in the management of the estate was evaluated by optimizing and evaluating long-term management policies recognizing and not recognizing the risk of wind damage. Results show that the value of considering the risk of wind damage was small for the whole estate. The expected net present value of the estate increased by ≤ 2% when the estate was managed according to the risk of wind damage. Furthermore, while the length of the time periods had a small influence on the scale of the entire estate, it had a larger influence on the scale of a smaller subset of stands in the estate. For the whole estate, the value of considering the risk of wind damage varied by ≤ 1.5% depending on the length of the time periods, while for a selected subset of stands it varied by ≤ 6.5%.
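As a hedged illustration of the Markov decision process idea above, the sketch below runs finite-horizon value iteration on a single hypothetical stand with a wind-damage probability. The states, revenues and transitions are invented toy numbers, far simpler than the paper's graph-based estate-wide model.

```python
# Illustrative finite-horizon value iteration for one stand under wind risk.
# All states, revenues and probabilities are invented toy numbers.

def value_iteration(horizon, p_wind):
    states = ["young", "mature", "damaged"]
    revenue = {"young": 0.0, "mature": 100.0, "damaged": 30.0}
    V = {s: 0.0 for s in states}              # value after the last period
    policy = {}
    for t in reversed(range(horizon)):
        newV = {}
        for s in states:
            # harvesting collects the revenue and restarts the stand as young
            harvest = revenue[s] + V["young"]
            # waiting: a young stand matures; a mature stand risks wind damage
            if s == "young":
                wait = V["mature"]
            elif s == "mature":
                wait = (1 - p_wind) * V["mature"] + p_wind * V["damaged"]
            else:
                wait = V["damaged"]
            newV[s] = max(harvest, wait)
            policy[(t, s)] = "harvest" if harvest >= wait else "wait"
        V = newV
    return V, policy
```

With a higher wind probability, waiting on a mature stand becomes less attractive, which is the qualitative effect the paper quantifies at estate scale.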
  •  
26.
  • Eriksson, Ola, et al. (author)
  • Introducing cost-plus-loss analysis into a hierarchical forestry planning environment
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 219, s. 415-431
  • Journal article (peer-reviewed)abstract
    • Cost-plus-loss analysis of data for forestry planning has often been carried out for highly simplified planning situations. In this study, we suggest an advance in the cost-plus-loss methodology that aims to capture the hierarchical structure and iterative nature of planning by a large forest owner. The simulation system that is developed to simulate the planning process of the forest owner includes the tactical and operational levels of a continuous planning process. The system is characterized by annual re-planning of the tactical plan with a planning horizon of ten years and with the option to reassess data for selected stands before operational planning. Operational planning is done with a planning horizon of two years, and the first year of the plan is considered to have been executed before moving the planning process one year forward. The annual cycle is repeated 10 times, simulating decisions made over a ten-year time horizon. The optimizing planning models of the system consider wood flow requirements, available harvest resources, seasonal variation of ground conditions and spatiality. The data used are evaluated according to standard procedures in cost-plus-loss analysis. Results from a test case indicate high decision losses when planning at both levels is based on the type of data prevalent in the stand databases of Swedish companies. The losses can be reduced substantially if higher-quality data are introduced before operational planning. In summary, the results indicate that the method makes it possible to analyze where in the planning process one needs better data, and what their value is.
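The cost-plus-loss principle behind this line of work can be sketched in a few lines: a data source is judged by its acquisition cost plus the decision loss it induces (the gap between the objective under perfect information and the objective achieved when plans are optimized on erroneous data). The function names and numbers below are hypothetical.

```python
def cost_plus_loss(value_perfect, value_with_data, data_cost):
    """Total of data acquisition cost and decision loss, where decision loss
    is the objective under perfect information minus the objective realized
    when the plan is optimized on (possibly erroneous) data."""
    return data_cost + (value_perfect - value_with_data)

def best_data_source(value_perfect, candidates):
    """Pick the data source minimizing cost plus loss.
    candidates: {name: (realized_value, acquisition_cost)} -- hypothetical."""
    return min(candidates,
               key=lambda n: cost_plus_loss(value_perfect, *candidates[n]))
```

For example, cheap register data with a large decision loss can be beaten by costlier, more accurate data whose loss reduction exceeds its price.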
  •  
27.
  •  
28.
  • Gabteni, S., et al. (author)
  • Combining column generation and constraint programming to solve the tail assignment problem
  • 2009
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 1572-9338 .- 0254-5330. ; 171:1, s. 61-76
  • Journal article (peer-reviewed)abstract
    • Within the area of short-term airline operational planning, Tail Assignment is the problem of assigning flight legs to individual identified aircraft while satisfying all operational constraints and optimizing some objective function. In this article, we propose that Tail Assignment should be solved as part of both short- and long-term airline planning. We further present a hybrid column generation and constraint programming solution approach. This approach can be used to quickly produce solutions for operations management, and also to produce close-to-optimal solutions for long- and mid-term planning scenarios. We present computational results which illustrate the practical usefulness of the approach.
  •  
29.
  • Grahn, Sofia, 1970-, et al. (author)
  • Population Monotonic Allocation Schemes in Bankruptcy Games
  • 2001
  • In: Annals of Operations Research. - : Kluwer Academic Publishers. - 0254-5330 .- 1572-9338. ; 109:1-4, s. 315-327
  • Journal article (peer-reviewed)abstract
    • The USA Bankruptcy Code legislates the bankruptcy of firms. Any allocation mechanism that is legal according to the Bankruptcy Code is necessarily population monotonic. Bankruptcy rules yielding a population monotonic allocation scheme in the associated bankruptcy game are characterized by efficiency, reasonability (each claimant receives a nonnegative amount not exceeding his claim), and the thieve property. The thieve property for bankruptcy problems entails that if a claimant manages to escape with his claim, the amount allocated to each remaining claimant is not larger than his share in the original problem. Many bankruptcy rules studied in the literature are efficient, reasonable, self-consistent, and monotonic. Rules satisfying these axioms are shown to yield population monotonic allocation schemes.
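As background on the bankruptcy rules discussed above, the sketch below implements one classical rule that is efficient and reasonable in the stated sense, the constrained equal awards rule, via bisection on the common award level. It is offered for illustration only and is not the characterization studied in the paper.

```python
def constrained_equal_awards(claims, estate):
    """Constrained equal awards (CEA) rule: every claimant receives the same
    award level lam, capped by the individual claim, with lam chosen so that
    the estate is exactly exhausted: sum_i min(c_i, lam) = estate.
    Assumes 0 <= estate <= sum(claims)."""
    lo, hi = 0.0, max(claims)
    for _ in range(200):                 # bisection on the award level
        mid = (lo + hi) / 2
        if sum(min(c, mid) for c in claims) < estate:
            lo = mid
        else:
            hi = mid
    return [min(c, hi) for c in claims]
```

Efficiency (the estate is fully distributed) and reasonability (each claimant gets a nonnegative amount not exceeding the claim) hold by construction.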
  •  
30.
  • Hamdan, Sadeque, et al. (author)
  • On the binary formulation of air traffic flow management problems
  • 2023
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 321, s. 267-279
  • Journal article (peer-reviewed)abstract
    • We discuss a widely used air traffic flow management formulation. We show that this formulation can lead to a solution where air delays are assigned to flights during their take-off, which is prohibited in practice. Although air delay is more expensive than ground delay, the model may assign air delay to a few flights during their take-off in order to save more by reducing ground delay. We present a modified formulation and verify that it avoids such incorrect solutions.
  •  
31.
  • Hammer, Peter L., et al. (author)
  • Maximum weight archipelago subgraph problem
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 217:1, s. 253-262
  • Journal article (peer-reviewed)abstract
    • This paper is devoted to a new problem of combinatorial optimization. The problem is called the Maximum Weight Archipelago Subgraph Problem (MWASP). An archipelago is a signed graph such that the negative edges connect the components of the graph of the positive edges. The new problem is to find a subset of edges in a weighted signed graph such that (i) if the edges of the subset are deleted from the graph, then the remaining graph is an archipelago; and (ii) the subset has minimal total weight among the subsets having property (i). The problem is NP-complete; however, a polynomial algorithm is provided to obtain the maximal weight of an edge that still has to be deleted. MWASP is used to analyze the relation of the blue chips of the Dow Jones Index.
  •  
32.
  • Hasan, Md Bokhtiar, et al. (author)
  • Do commodity assets hedge uncertainties? What we learn from the recent turbulence period?
  • 2022
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338.
  • Journal article (peer-reviewed)abstract
    • This study analyses the impact of different uncertainties on commodity markets to assess commodity markets' hedging or safe-haven properties. Using time-varying dynamic conditional correlation and wavelet-based quantile-on-quantile regression models, our findings show that, both before and during the COVID-19 crisis, soybeans and clean energy stocks offer strong safe-haven opportunities against cryptocurrency price uncertainty and geopolitical risks (GPR). Soybean markets weakly hedge cryptocurrency policy uncertainty, US economic policy uncertainty, and crude oil volatility. In addition, GSCI commodity and crude oil also offer a weak safe-haven property against cryptocurrency uncertainties and GPR. Consistent with earlier studies, our findings indicate that safe-haven traits can alter across frequencies and quantiles. Our findings have significant implications for investors and regulators in hedging and making proper decisions, respectively, under diverse uncertain circumstances.
  •  
33.
  • Holm, Åsa, et al. (author)
  • Heuristics for Integrated Optimization of Catheter Positioning and Dwell Time Distribution in Prostate HDR Brachytherapy
  • 2016
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 236:2, s. 319-339
  • Journal article (peer-reviewed)abstract
    • High dose-rate (HDR) brachytherapy is a kind of radiotherapy used to treat, among others, prostate cancer. When applied to prostate cancer, a radioactive source is moved through catheters implanted into the prostate. For each patient a treatment plan is constructed that decides, for example, catheter placement and dwell time distribution, that is, where to stop the radioactive source and for how long. Mathematical optimization methods have been used to find quality plans with respect to dwell time distribution; however, few optimization approaches regarding catheter placement have been studied. In this article we present an integrated optimization model that optimizes catheter placement and dwell time distribution simultaneously. Our results show that integrating the two decisions yields greatly improved plans, from 15% to 94% improvement. Since the presented model is computationally demanding to solve, we also present three heuristics: tabu search, variable neighbourhood search and a genetic algorithm. Of these, variable neighbourhood search is clearly the best, outperforming a state-of-the-art optimization software (CPLEX) and the two other heuristics.
  •  
34.
  •  
35.
  • Horn, Matthias, et al. (author)
  • A* Search for Prize-Collecting Job Sequencing with One Common and Multiple Secondary Resources
  • 2021
  • In: Annals of Operations Research. - : SPRINGER. - 0254-5330 .- 1572-9338. ; 302, s. 477-505
  • Journal article (peer-reviewed)abstract
    • We consider a sequencing problem with time windows, in which a subset of a given set of jobs shall be scheduled. A scheduled job has to execute without preemption and during this time, the job needs both a common resource for a part of the execution as well as a secondary resource for the whole execution time. The common resource is shared by all jobs while a secondary resource is shared only by a subset of the jobs. Each job has one or more time windows and due to these, it is not possible to schedule all jobs. Instead, each job is associated with a prize and the task is to select a subset of jobs which yields a feasible schedule with a maximum sum of prizes. First, we argue that the problem is NP-hard. Then, we present an exact A* algorithm and derive different upper bounds for the total prize; these bounds are based on constraint and Lagrangian relaxations of a linear programming relaxation of a multidimensional knapsack problem. For comparison, a compact mixed integer programming (MIP) model and a constraint programming model are also presented. An extensive experimental evaluation on three types of problem instances shows that the A* algorithm outperforms the other approaches and is able to solve small to medium size instances with up to about 40 jobs to proven optimality. In cases where A* does not prove that an optimal solution is found, the obtained upper bounds are stronger than those of the MIP model.
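A generic best-first (A*-style) search can illustrate the role of an admissible upper bound in prize-collecting problems like the one above. The toy problem below (jobs run back to back on one machine within a horizon) and its bound (sum of remaining prizes) are invented and much weaker than the paper's LP and Lagrangian bounds.

```python
import heapq

def astar_prize_jobs(jobs, horizon):
    """Best-first search for a toy prize-collecting problem: jobs is a list
    of (duration, prize); scheduled jobs run back to back within `horizon`.
    State = (next job index, time used, prize collected). The heuristic is
    the sum of all remaining prizes -- an admissible upper bound, so the
    first time no open node can beat the incumbent, the incumbent is optimal."""
    rest = [0.0] * (len(jobs) + 1)           # rest[i] = prizes of jobs i..end
    for i in range(len(jobs) - 1, -1, -1):
        rest[i] = rest[i + 1] + jobs[i][1]
    heap = [(-rest[0], 0, 0, 0.0)]           # (-f, index, time, collected)
    best = 0.0
    while heap:
        neg_f, i, t, got = heapq.heappop(heap)
        if -neg_f <= best:                   # bound: nothing can beat incumbent
            break
        if i == len(jobs):
            best = max(best, got)
            continue
        dur, prize = jobs[i]
        # branch 1: skip job i
        heapq.heappush(heap, (-(got + rest[i + 1]), i + 1, t, got))
        # branch 2: schedule job i if it fits in the horizon
        if t + dur <= horizon:
            heapq.heappush(heap, (-(got + prize + rest[i + 1]),
                                  i + 1, t + dur, got + prize))
    return best
```

Tighter bounds, like those derived in the paper, prune more nodes and let the search terminate earlier with the same optimality guarantee.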
  •  
36.
  • Ingebretsen Carlson, Jim (author)
  • A speedy auction using approximated bidders' preferences
  • 2020
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 288:1, s. 65-93
  • Journal article (peer-reviewed)abstract
    • This paper presents a combinatorial auction, which is of particular interest when short completion times are of importance. It is based on a method for approximating the bidders' preferences over two types of item when complementarity between the two may exist. The resulting approximated preference relation is shown to be complete and transitive at any given price vector. It is shown that an approximated Walrasian equilibrium always exists if all bidders either view the items as substitutes or complements. If the approximated preferences of the bidders comply with the gross substitutes condition, then the set of approximated Walrasian equilibrium prices forms a complete lattice. A process is proposed that is shown to always reach the smallest approximated Walrasian price vector. Simulation results suggest that the approximation procedure works well as the difference between the approximated and true minimal Walrasian prices is small.
  •  
37.
  • Jana, Rabin K., et al. (author)
  • COVID-19 news and the US equity market interactions: An inspection through econometric and machine learning lens
  • 2022
  • In: Annals of Operations Research. - : SPRINGER. - 0254-5330 .- 1572-9338.
  • Journal article (peer-reviewed)abstract
    • This study investigates the impact of COVID-19 on the US equity market during the first wave of the coronavirus, using a wide range of econometric and machine learning approaches. To this end, we use both daily data on the US equity market sectors and data about COVID-19 news over January 1, 2020-March 20, 2020. Accordingly, we show that at an early stage of the outbreak, global COVID-19 fears impacted the US equity market unevenly across sectors. Further, we also find that, as the pandemic gradually intensified its footprint in the US, local fears manifested by daily infections emerged more powerfully than their global counterpart in impairing the short-term dynamics of US equity markets.
  •  
38.
  • Janson, Svante (author)
  • Asymptotic bias of some election methods
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 215:1, s. 89-136
  • Journal article (peer-reviewed)abstract
    • Consider an election where N seats are distributed among parties with proportions p_1, …, p_m of the votes. We study, for the common divisor and quota methods, the asymptotic distribution, and in particular the mean, of the seat excess of a party, i.e. the difference between the number of seats given to the party and the (real) number Np_i that yields exact proportionality. Our approach is to keep p_1, …, p_m fixed and let N → ∞, with N random in a suitable way. In particular, we give formulas showing the bias favouring large or small parties for the different election methods.
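The divisor methods analysed in this line of work can be sketched in a few lines: D'Hondt uses divisors 1, 2, 3, … and is known to favour large parties, while Sainte-Laguë uses 1, 3, 5, … This toy allocator is illustrative background, unrelated to the paper's asymptotic analysis.

```python
import heapq

def divisor_method(votes, seats, divisor):
    """Allocate `seats` among parties: repeatedly give the next seat to the
    party with the largest quotient votes / divisor(s), where s is the
    number of seats the party already holds."""
    alloc = [0] * len(votes)
    heap = [(-v / divisor(0), i) for i, v in enumerate(votes)]
    heapq.heapify(heap)                       # max-heap via negated quotients
    for _ in range(seats):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        heapq.heappush(heap, (-votes[i] / divisor(alloc[i]), i))
    return alloc

dhondt = lambda s: s + 1          # divisors 1, 2, 3, ...
sainte_lague = lambda s: 2 * s + 1  # divisors 1, 3, 5, ...
```

On the same vote vector the two methods can disagree, with D'Hondt shifting a seat from a small party to a large one, which is the bias the paper quantifies asymptotically.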
  •  
39.
  • Johansson, Lina, et al. (author)
  • Quantifying sustainable control of inventory systems with non-linear backorder costs
  • 2017
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 259:1-2, s. 217-239
  • Journal article (peer-reviewed)abstract
    • Traditionally, when optimizing base-stock levels in spare parts inventory systems, it is common to base the decisions either on a linear shortage cost or on a certain target fill rate. However, in many practical settings the shortage cost is a non-linear function of the customer waiting time. In particular, there may exist contracts between the spare parts provider and the customer, where the provider is obliged to pay a fixed penalty fee if the spare part is not delivered within a certain time window. We consider a two-echelon inventory system with one central warehouse and multiple local sites. Focusing on spare parts products, we assume continuous review base stock policies. We first consider a fixed backorder cost whenever a customer’s time in backorder exceeds a prescribed time limit, second a general non-linear backorder cost as a function of the customer waiting time, and third a time window service constraint. We show from a sustainability perspective how our model may be used for evaluating the expected CO₂ emissions associated with not satisfying the customer demands on time. Finally, we generalize some known inventory models by deriving exact closed form expressions of inventory level distributions.
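A minimal sketch of base-stock evaluation, assuming a single site with Poisson lead-time demand — a standard textbook setting, much simpler than the paper's two-echelon model with non-linear backorder costs:

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    return exp(-mu) * mu**k / factorial(k)

def fill_rate(S, mu):
    """Fill rate of a continuous-review base-stock policy with base-stock
    level S and Poisson lead-time demand with mean mu: the probability that
    an arriving unit demand finds stock on hand, P(D <= S - 1)."""
    return sum(poisson_pmf(k, mu) for k in range(S))

def smallest_base_stock(mu, target):
    """Smallest base-stock level meeting a target fill rate."""
    S = 0
    while fill_rate(S, mu) < target:
        S += 1
    return S
```

Replacing the fill-rate target with a penalty on waiting times beyond a time window changes which S is optimal; that is the kind of non-linear backorder cost the paper treats exactly.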
  •  
40.
  • Kalaivaani, P. C. D., et al. (author)
  • Advanced lightweight feature interaction in deep neural networks for improving the prediction in click through rate
  • 2021
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338.
  • Journal article (peer-reviewed)abstract
    • Online advertising has expanded into a hundred-billion-dollar industry in recent years, with sales growing at a faster rate every year. Prediction of the click-through rate (CTR) plays an important role in recommender systems and online ads. CTR prediction is the newest evolution in the digital advertising and marketing world. It is essential for any online advertising company to display, in real time, the appropriate ads to the right users in the correct context. Much of the proposed research considers each ad separately and does not take into account its relationship with other ads, which may have an impact on the click-through rate. A factorization machine, a more generalized predictor akin to support vector machines (SVMs), is not able to estimate reliable parameters under sparsity. The main drawback is that the primary features and existing algorithms consider the large weighted parameters. A knowledge-graph-based convolutional network (KGCN) overcomes this drawback and works on alternating graphs, which creates additional clustering and node comparison with high latency. A new framework, DeepLight Weight, is proposed to resolve the high server latency and high memory usage issues in online advertising. This work presents a framework to improve CTR predictions with the objective of accelerating model inference and pruning redundant parameters and dense embedding vectors. A field-weighted factorization machine helps to organize the data features with high structure to improve accuracy. To reduce latency, structural pruning makes the algorithm work with dense matrices by combining and executing the individual matrix values or neural nodes.
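The factorization machine mentioned above can be sketched directly from its defining equation, using the standard O(kn) reformulation of the pairwise interaction term. This is generic second-order FM prediction, not the paper's DeepLight framework.

```python
def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction:
        y = w0 + sum_i w_i x_i + sum_{i<j} <V_i, V_j> x_i x_j,
    where V[i] is the k-dimensional latent vector of feature i. The pairwise
    sum uses the identity
        sum_{i<j} <V_i, V_j> x_i x_j
          = 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i (V_if x_i)^2 ],
    which costs O(k n) instead of O(k n^2)."""
    n, k = len(x), len(V[0])
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))
        sq = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        pairwise += 0.5 * (s * s - sq)
    return linear + pairwise
```

The latent-factor form is what lets FMs estimate interaction weights under the sparsity that defeats a plain pairwise model.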
  •  
41.
  • Karlsson, Emil, et al. (author)
  • A matheuristic approach to large-scale avionic scheduling
  • 2021
  • In: Annals of Operations Research. - : SPRINGER. - 0254-5330 .- 1572-9338. ; 302, s. 425-459
  • Journal article (peer-reviewed)abstract
    • Pre-runtime scheduling of avionic systems is used to ensure that the systems provide the desired functionality at the correct time. This paper considers scheduling of an integrated modular avionic system which from a more general perspective can be seen as a multiprocessor scheduling problem that includes a communication network. The addressed system is practically relevant and the computational evaluations are made on large-scale instances developed together with the industrial partner Saab. A subset of the instances is made publicly available. Our contribution is a matheuristic for solving these large-scale instances and it is obtained by improving the model formulations used in a previously suggested constraint generation procedure and by including an adaptive large neighbourhood search to extend it into a matheuristic. Characteristics of our adaptive large neighbourhood search are that it is made over both discrete and continuous variables and that it needs to balance the search for feasibility and profitable objective value. The repair operation is to apply a mixed-integer programming solver on a model where most of the constraints are treated as soft and a violation of them is instead penalised in the objective function. The largest solved instance, with respect to the number of tasks, has 54,731 tasks and 2530 communication messages.
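The adaptive large neighbourhood search loop described above can be sketched generically: destroy and repair operators are drawn with probability proportional to adaptive weights, and operators that improve the incumbent are rewarded. This skeleton is illustrative only; in the paper the repair step applies a MIP solver to a model with soft constraints.

```python
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=100, seed=0):
    """Skeleton of an adaptive large neighbourhood search (illustrative).
    destroy_ops / repair_ops are lists of callables solution -> solution;
    operators whose combination improves the current solution get their
    selection weight increased."""
    rng = random.Random(seed)
    w_d = {d: 1.0 for d in destroy_ops}
    w_r = {r: 1.0 for r in repair_ops}
    best = cur = initial
    for _ in range(iters):
        d = rng.choices(list(w_d), weights=list(w_d.values()))[0]
        r = rng.choices(list(w_r), weights=list(w_r.values()))[0]
        cand = r(d(cur))
        if cost(cand) < cost(cur):
            cur = cand
            w_d[d] += 1.0            # reward the improving operator pair
            w_r[r] += 1.0
        if cost(cur) < cost(best):
            best = cur
    return best
```

Balancing feasibility search against objective improvement, as the paper does, would amount to a cost function mixing constraint-violation penalties with the true objective.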
  •  
42.
  • Laksman, Efraim, 1983, et al. (author)
  • The stochastic opportunistic replacement problem, part III: improved bounding procedures
  • 2020
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 1572-9338 .- 0254-5330. ; 292:2, s. 711-733
  • Journal article (peer-reviewed)abstract
    • We consider the problem to find a schedule for component replacement in a multi-component system, whose components possess stochastic lives and economic dependencies, such that the expected costs for maintenance during a pre-defined time period are minimized. The problem was considered in Patriksson et al. (Ann Oper Res 224:51–75, 2015), in which a two-stage approximation of the problem was optimized through decomposition (denoted the optimization policy). The current paper improves the effectiveness of the decomposition approach by establishing a tighter bound on the value of the recourse function (i.e., the second stage in the approximation). A general lower bound on the expected maintenance cost is also established. Numerical experiments with 100 simulation scenarios for each of four test instances show that the tighter bound yields a decomposition generating fewer optimality cuts. They also illustrate the quality of the lower bound. Contrary to results presented earlier, an age-based policy performs on par with the optimization policy, although most simple policies perform worse than the optimization policy.
  •  
43.
  • Lindström, Erik (author)
  • Estimating parameters in diffusion processes using an approximate maximum likelihood approach
  • 2007
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 1572-9338 .- 0254-5330. ; 151:1, s. 269-288
  • Journal article (peer-reviewed)abstract
    • We present an approximate Maximum Likelihood estimator for univariate Ito stochastic differential equations driven by Brownian motion, based on numerical calculation of the likelihood function. The transition probability density of a stochastic differential equation is given by the Kolmogorov forward equation, known as the Fokker-Planck equation. This partial differential equation can only be solved analytically for a limited number of models, which is the reason for applying numerical methods based on higher order finite differences. The approximate likelihood converges to the true likelihood, both theoretically and in our simulations, implying that the estimator has many nice properties. The estimator is evaluated on simulated data from the Cox-Ingersoll-Ross model and a non-linear extension of the Chan-Karolyi-Longstaff-Sanders model. The estimates are similar to the Maximum Likelihood estimates when these can be calculated, and converge to the true Maximum Likelihood estimates as the accuracy of the numerical scheme is increased. The estimator is also compared to two benchmarks: a simulation-based estimator and a Crank-Nicolson scheme applied to the Fokker-Planck equation; the proposed estimator remains competitive.
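As a simpler stand-in for the finite-difference Fokker-Planck likelihood used in the paper, the sketch below computes the Euler-Maruyama approximate negative log-likelihood for the Cox-Ingersoll-Ross model, where each transition is approximated by a Gaussian. This is a cruder approximation than the one the paper evaluates.

```python
from math import log, pi

def euler_neg_loglik(params, xs, dt):
    """Approximate negative log-likelihood for the CIR model
        dX = kappa*(theta - X) dt + sigma*sqrt(X) dW,
    using the Euler-Maruyama Gaussian transition density
        X_{t+dt} | X_t = x ~ N(x + kappa*(theta - x)*dt, sigma^2 * x * dt).
    xs is the observed path sampled at spacing dt."""
    kappa, theta, sigma = params
    nll = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        mean = x0 + kappa * (theta - x0) * dt
        var = sigma ** 2 * x0 * dt
        nll += 0.5 * (log(2 * pi * var) + (x1 - mean) ** 2 / var)
    return nll
```

Minimizing this over (kappa, theta, sigma) gives the Euler pseudo-MLE; the paper's point is that a numerically solved Fokker-Planck density yields estimates much closer to the exact MLE, especially for coarse dt.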
  •  
44.
  • Linusson, Svante, et al. (author)
  • Dynamic adjustment : An electoral method for relaxed double proportionality
  • 2014
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 215:1, s. 183-199
  • Journal article (peer-reviewed)abstract
    • We describe an electoral system for distributing seats in a parliament. It gives proportionality for the political parties and close to proportionality for constituencies. The system suggested here is a version of the system used in Sweden and other Nordic countries, with permanent seats in each constituency and adjustment seats to give proportionality on the national level. In the national election of 2010 the current Swedish system failed to give proportionality between parties. We examine here one possible cure for this unwanted behavior. The main difference compared to the current Swedish system is that the number of adjustment seats is not fixed, but rather dynamically determined to be as low as possible while still ensuring proportionality between parties.
  •  
45.
  • Lorén, Sara, 1970, et al. (author)
  • Maintenance for reliability - a case study
  • 2015
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 224:1, s. 111-119
  • Journal article (peer-reviewed)abstract
    • The optimal replacement problem for components with stochastic lives has an appealing solution based on the TTT-transform. The issue is revisited for components which are regularly inspected and where statistical uncertainties are taken into account by means of the method of predicted profile likelihood. The ideas are applied on crack growth data on a low pressure nozzle in a jet engine. It turns out that the standard method is not directly applicable and that the effect of uncertainties on the replacement times is not easy to predict.
  •  
46.
  • Ma, Ke, et al. (author)
  • Development of a central order processing system for optimizing demand-driven textile supply chains : a real case based simulation study
  • 2020
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 291:1-2, SI, s. 627-656
  • Journal article (peer-reviewed)abstract
    • Nowadays, the demand for small-series production and quick response is becoming increasingly important in textile supply chains. To meet the increasing trend of customization in garment production, forecast-based supply chain models are no longer suitable. Demand-driven garment supply chains are increasingly developed and employed. However, current models for demand-driven supply chains still have many shortcomings, e.g. long lead times and low efficiency. Therefore, in this study we proposed a new collaborative model with a central order processing system (COPS) to optimize the current demand-driven garment supply chain and improve multiple supply chain performances. Common and important supply chain collaboration strategies, including resource sharing, information sharing, joint decision making and profit sharing, were merged into this system. Discrete-event simulation technology was utilized to experiment with and evaluate the new collaborative model under different conditions, based on a real case in France. Multiple key performance indicators (KPIs) were examined for the whole supply chain and also for individual companies. Based on the simulation experiment results, we found that the proposed collaborative model gains improvements in all examined KPIs. The new model with COPS performed better under the high-workload condition than under the low-workload condition. It can increase not only the overall profit level of the whole supply chain but also the individual profit level of each company.
  •  
47.
  •  
48.
  • Miettinen, Kaisa, 1965-, et al. (author)
  • Improving the Computational Efficiency in a Global Formulation (GLIDE) for Interactive Multiobjective Optimization
  • 2012
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 197:1, s. 47-70
  • Journal article (peer-reviewed)abstract
    • In this paper, we present a new general formulation for multiobjective optimization that can accommodate several interactive methods of different types (regarding various types of preference information required from the decision maker). This formulation provides a comfortable implementation framework for a general interactive system and allows the decision maker to conveniently apply several interactive methods in one solution process. In other words, the decision maker can at each iteration of the solution process choose how to give preference information to direct the interactive solution process, and the formulation enables changing the type of preferences, that is, the method used, whenever desired. The first general formulation, GLIDE, included eight interactive methods utilizing four types of preferences. Here we present an improved version where we pay special attention to the computational efficiency (especially significant for large and complex problems), by eliminating some constraints and parameters of the original formulation. To be more specific, we propose two new formulations, depending on whether the multiobjective optimization problem to be considered is differentiable or not. Some computational tests are reported showing improvements in all cases. The generality of the new improved formulations is supported by the fact that they can accommodate six interactive methods more, that is, a total of fourteen interactive methods, just by adjusting parameter values.
  •  
49.
  • Morreale, Azzurra, et al. (author)
  • Uncertain outcome presentations bias decisions : experimental evidence from Finland and Italy
  • 2018
  • In: Annals of Operations Research. - : Springer. - 0254-5330 .- 1572-9338. ; 268:1-2, s. 259-272
  • Journal article (peer-reviewed)abstract
    • Even in their everyday lives, people are expected to make difficult decisions objectively and rationally, no matter how complex or uncertain the situation. In this research, we study how the format of presentation and the amount of presented information concerning risky events influence the decision-making process and the propensity of decision makers to take risk. The results of an exploratory survey conducted in Finland and in Italy suggest that decision-making behavior changes according to the way the information is presented. We provide experimental evidence that different representations of expected outcomes create distinct cognitive biases and, as a result, affect the decisions made. This identified change in the perception of risk has, to the best of our knowledge, not been identified or directly studied previously in the scientific literature. The paper thus presents novel insights into managerial decision-making that are potentially relevant for decision support theory, with implications for decision makers and information providers. Understanding the impact of various forms of presentation of risk is crucial for conveying information clearly and in a way that avoids misunderstandings. The implications of the results for avoiding opportunistic manipulation of decisions are also of great concern in many application areas. Social networks are more and more frequently being used as a source of information, and in this context it is crucial to acknowledge the effect that different ways of presenting and communicating risky outcomes may have on the behavior of the target group. The results presented here may, for example, be highly relevant for marketing and advertising conducted via social media or social networks.
  •  
50.
  • Nagurney, Anna, et al. (author)
  • A supply chain network game theory model with product differentiation, outsourcing of production and distribution, and quality and price competition
  • 2015
  • In: Annals of Operations Research. - : Springer Science and Business Media LLC. - 0254-5330 .- 1572-9338. ; 226:1, s. 479-503
  • Journal article (peer-reviewed)abstract
    • In this paper, we develop a supply chain network game theory model with product differentiation, possible outsourcing of production and distribution, and quality and price competition. The original firms compete with one another in terms of in-house quality levels and product flows, whereas the contractors, aiming at maximizing their own profits, engage in competition for the outsourced production and distribution in terms of prices that they charge and their quality levels. The solution of the model provides each original firm with its optimal in-house quality level as well as its optimal in-house and outsourced production and shipment quantities that minimize the total cost and the weighted cost of disrepute, associated with lower quality levels and the impact on a firm’s reputation. The governing equilibrium conditions of the model are formulated as a variational inequality problem. An algorithm, which provides a discrete-time adjustment process and tracks the evolution of the product flows, quality levels, and prices over time, is proposed and convergence results given. Numerical examples are provided to illustrate how such a supply chain network game theory model can be applied in practice. The model is relevant to products ranging from pharmaceuticals to fast fashion to high technology products.
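The discrete-time adjustment process mentioned above can be sketched as a projection method for a variational inequality VI(F, K): step against the map F and project back onto the feasible set K. The map, step size and feasible set below (the nonnegative orthant) are illustrative, not the paper's supply chain network model.

```python
def projection_method(F, x0, step=0.1, iters=2000):
    """Projection (discrete-time adjustment) method for a variational
    inequality VI(F, K) with K the nonnegative orthant: iterate
        x <- proj_K(x - step * F(x)),
    whose fixed points solve the VI. F maps a list of floats to a list of
    floats; convergence holds under standard monotonicity/Lipschitz
    assumptions on F."""
    x = list(x0)
    for _ in range(iters):
        x = [max(0.0, xi - step * fi) for xi, fi in zip(x, F(x))]
    return x
```

In the paper's setting the iterate would stack product flows, quality levels and prices, and the projection keeps them in their feasible ranges as the process tracks their evolution over time.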
  •  