SwePub

Search results for "WFRF:(de Blanche Andreas 1975 )"


  • Results 1-23 of 23
1.
  • de Blanche, Andreas, 1975, et al. (author)
  • Dual Core Efficiency for Engineering Simulation Applications
  • 2008
  • In: International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas, NV, USA, 14-17 July 2008, pp. 888-894
  • Conference paper (peer-reviewed), abstract:
    • With the advent of multi-core processors, the parallel execution of simulation applications has resulted in new problems and possibilities in resource usage in high performance computing (HPC). In this paper we investigate the impact of executing engineering applications utilizing one and two cores per node in an Intel Core 2 Duo based Linux cluster. In the engineering industry, the number of licenses puts practical and economic constraints on the maximum number of processes. Consequently, the issue of how to distribute a given number of processes over the compute nodes in an HPC resource becomes very important. When distributing the application over multiple nodes we found that having N processes on N compute nodes, using only one core on each node, is significantly faster than running N processes on N cores in N/2 compute nodes. Only in one case out of 32 was it beneficial to use both cores. The “one compute node – one simulation process” approach gave an average cost-efficiency increase of 16.5%, and for several sub-cases it is actually cost-beneficial to run on more nodes rather than fewer, which also decreases the overall run time.
  •  
2.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Multicore Clusters for CFD Simulations : Comparative Study of Three CFD-Softwares
  • 2012
  • In: Proceedings of the 2012 International Conference on Parallel and Distributed Processing Techniques and Applications, Part II. - : CSREA Press. - 1601322275 - 1601322283 ; , pp. 855-852
  • Conference paper (peer-reviewed), abstract:
    • Multicore processors are here to stay, fulfill Moore’s law, and might very well revolutionize the computer industry. However, we are now in a transitional period before new programming models, numerical algorithms, and general computer architectures have been developed and the software has been rewritten. This paper focuses on the effects multicore-based systems have on industrial computational fluid dynamics (CFD) simulations. The most significant finding was that five of the models ran faster when only one process was executed on each multicore node instead of two. In these cases the execution time increased by between 6.5% and 64%, with a median increase of 10%, when utilizing both cores.
  •  
3.
  • Namaki, Nima, 1975-, et al. (author)
  • A Tool for Processor Dependency Characterization of HPC Applications
  • 2009
  • In: Proceedings for the HPC Asia & APAN 2009. - Hsinchu, Taiwan : National Center for High-Performance Computing. - 9789868522800 ; , pp. 70-76
  • Conference paper (peer-reviewed), abstract:
    • In this paper we implement and verify Cpugen, a tool for characterizing the processor resource utilization of HPC applications. To this end, Cpugen generates processor load with good accuracy. Cpugen was verified through three different phases: passive, active, and real-world application measurements. The measurement results show that the implemented method is a viable option for non-intrusive, stable, and robust load generation. The error range for all generated target loads is between 0.00% and 1.04%, with a median deviation of 0.11%. We conclude that the method utilized in this investigation provides the ability to generate stable and robust processor load.
  •  
4.
  • Namaki, Nima, 1975-, et al. (author)
  • Exhausted Dominated Performance : Basic Proof of Concept
  • 2010
  • In: International conference on Parallel and Distributed Processing Techniques and Applications. - Las Vegas : CSREA. - 1601321562 - 1601321570 ; , pp. 63-67
  • Conference paper (peer-reviewed)
  •  
5.
  • Namaki, Nima, 1975-, et al. (author)
  • Exhaustion dominated performance : a first attempt
  • 2009
  • In: SAC '09. - New York, NY, USA : ACM. - 9781605581668 ; , pp. 1011-1012
  • Conference paper (peer-reviewed), abstract:
    • In this paper we present a first attempt at an analytical method to discover and understand how the available resources influence the execution time. Our method is based on a piecewise linear model of dominating execution limitations and on black-box observations. We verify this analysis method with a set of real-world experiments. Finally, we conclude that the different effects follow a linear superposition within a certain range.
  •  
6.
  •  
7.
  • Andersson, H. Robert H., et al. (author)
  • Flipping the Data Center Network : Increasing East-West Capacity Using Existing Hardware
  • 2017
  • In: 2017 IEEE 42nd Conference on Local Computer Networks (LCN), 9-12 Oct. 2017. - : IEEE. - 9781509065233 - 9781509065226 ; , pp. 211-214
  • Conference paper (peer-reviewed), abstract:
    • In today's datacenters, there is an increasing demand for more network traffic capacity. The majority of the increase in traffic is internal to the datacenter, i.e., it flows between different servers within the datacenter. This category of traffic is often referred to as east-west traffic, and traditional hierarchical architectures are not well equipped to handle it; they are better suited for the north-south traffic between hosts and the Internet. One suggested solution for this capacity problem is to adopt a folded Clos topology, also known as spine-leaf, which often relies on software-defined networking (SDN) controllers to manage traffic. This paper shows that it is possible to implement a spine-leaf network using commodity off-the-shelf switches and thus improve east-west traffic capacity. This can be obtained using low-complexity configuration and edge-routing for load balancing, eliminating the need for a centralized SDN controller.
  •  
8.
  • de Blanche, Andreas, 1975-, et al. (author)
  • A methodology for estimating co-scheduling slowdowns due to memory bus contention on multicore nodes
  • 2014
  • In: Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Networks, PDCN 2014. - : ACTA Press. - 9780889869677 - 9780889869653 ; , pp. 216-223
  • Conference paper (peer-reviewed), abstract:
    • When two or more programs are co-scheduled on the same multicore computer they might experience a slowdown due to the limited off-chip memory bandwidth. According to our measurements, this slowdown does not depend on the total bandwidth use in a simple way. One thing we observe is that higher memory bandwidth usage does not always lead to a larger slowdown. This means that relying on bandwidth usage as input to a job scheduler might cause non-optimal scheduling of processes on multicore nodes in clusters, clouds, and grids. To guide scheduling decisions, we instead propose a slowdown-based characterization approach. Real slowdowns are complex to measure due to the exponential number of experiments needed. Thus, we present a novel method for estimating the slowdown programs will experience when co-scheduled on the same computer. We evaluate the method by comparing its predictions with real slowdown data and with the often-used memory bandwidth based method. This study shows that a scheduler relying on slowdown-based categorization makes fewer incorrect co-scheduling choices, and the negative impact on program execution times is smaller than with a bandwidth-based categorization method.
  •  
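The slowdown figure that this characterization approach is built on can be illustrated with a minimal sketch. The runtimes below and the idea of using one synthetic co-runner as a stand-in for real neighbors are invented for illustration; the paper's actual estimation method is more involved.

```python
def slowdown(solo_s, co_s):
    """Fractional slowdown of a program when co-scheduled: how much
    longer it runs next to a neighbor than when running alone."""
    return co_s / solo_s - 1.0

# Hypothetical measured runtimes (seconds): solo, and co-run with a
# synthetic memory-load generator sharing the node.
apps = {"cfd": (100.0, 131.0), "fem": (80.0, 84.0)}
for name, (solo, co) in apps.items():
    print(f"{name}: {slowdown(solo, co):.0%} slowdown")
```

A scheduler could then prefer to co-schedule the job with the small slowdown figure rather than the large one, regardless of their raw bandwidth usage.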
9.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Addressing characterization methods for memory contention aware co-scheduling
  • 2015
  • In: Journal of Supercomputing. - : Springer Science and Business Media LLC. - 0920-8542 .- 1573-0484. ; 71:4, pp. 1451-1483
  • Journal article (peer-reviewed), abstract:
    • The ability to precisely predict how memory contention degrades performance when co-scheduling programs is critical for reaching high performance levels in cluster, grid, and cloud environments. In this paper we present an overview and compare the performance of state-of-the-art characterization methods for memory-aware (co-)scheduling. We evaluate the prediction accuracy and co-scheduling performance of four methods: one slowdown-based, two cache-contention based, and one based on memory bandwidth usage. Both our regression analysis and scheduling simulations find that the slowdown-based method, represented by Memgen, performs better than the other methods. The linear correlation coefficient of Memgen's prediction is 0.890. Memgen's preferred schedules reached 99.53% of the obtainable performance on average. The memory bandwidth usage method performed almost as well as the slowdown-based method. Furthermore, while most prior work promotes characterization based on cache miss rate, we found it to be on par with random scheduling of programs and highly unreliable.
  •  
10.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Artificial and human aspects of Industry 4.0: an industrial work-integrated-learning research agenda
  • 2021
  • In: VILÄR. - 9789189325036
  • Conference paper (other academic/artistic), abstract:
    • The manufacturing industry is currently under extreme pressure to transform its organizations and competencies to reap the benefits of Industry 4.0. The main driver for Industry 4.0 is digitalization with disruptive technologies such as artificial intelligence, machine learning, the internet of things, digital platforms, etc. Industrial applications and research studies have shown promising results, but they rarely involve a human-centric perspective. Given this, we argue there is a lack of knowledge on how disruptive technologies take part in human decision-making and learning practices, and to what extent disruptive technologies may support both employees and organizations to “learn”. Recent research raises the importance and need of including a human-centric perspective in Industry 4.0, including a human learning and decision-making approach. Hence, disruptive technologies by themselves are no longer considered to solve the actual problems. Considering the richness of this topic, we propose an industrial work-integrated-learning research agenda to illuminate a human-centric perspective in Industry 4.0. This work-in-progress literature review aims to provide a research agenda on what and how application areas are covered in earlier research. Furthermore, the review identifies obstacles and opportunities that may affect manufacturing's ability to reap the benefits of Industry 4.0. As part of the research, several interdisciplinary areas are identified in which industrial work-integrated-learning should be considered to enhance the design, implementation, and use of Industry 4.0 technologies. In conclusion, this study proposes a research agenda aimed at furthering research on how industrial digitalization can approach human and artificial intelligence through industrial work-integrated-learning for a future digitalized manufacturing.
  •  
11.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Availability of Unused Computational Resources in an Ordinary Office Environment
  • 2010
  • In: Journal of Circuits, Systems and Computers. - 0218-1266. ; 19:3, pp. 557-572
  • Journal article (peer-reviewed), abstract:
    • The study presented in this paper highlights an important issue that was the subject of discussions and research about a decade ago and has now gained new interest with the current advances of grid computing and desktop grids. New techniques are being invented for utilizing desktop computers for computational tasks, but no other study, to our knowledge, has explored the availability of the said resources. The general assumption has been that there are resources and that they are available. The study is based on a survey of the availability of resources in an ordinary office environment. The aim of the study was to determine if there are truly usable under-utilized networked desktop computers available for non-desktop tasks during the off-hours. We found that in more than 96% of the cases the computers in the current investigation were available for the formation of part-time (night and weekend) computer clusters. Finally, we compare the performance of a full-time and a metamorphosic cluster, based on one hypothetical linearly scalable application and a real-world welding simulation.
  •  
12.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Disallowing Same-program Co-schedules to Improve Efficiency in Quad-core Servers
  • 2017
  • In: Proceedings of the Joined Workshops COSH 2017 and VisorHPC 2017. - 9783000555640 ; , pp. 1-7
  • Conference paper (peer-reviewed), abstract:
    • Programs running on different cores in a multicore server are often forced to share resources like off-chip memory, caches, I/O devices, etc. This resource sharing often leads to degraded performance, a slowdown, for the programs that share the resources. A job scheduler can improve performance by co-scheduling programs that use different resources on the same server. The most common approach to solving this co-scheduling problem has been to make job schedulers resource-aware, finding ways to characterize and quantify a program’s resource usage. We have earlier suggested a simple, program- and resource-agnostic scheme as a stepping stone to solving this problem: Avoid Terrible Twins, i.e., avoid co-schedules that contain several instances of the same program. This scheme showed promising results when applied to dual-core servers. In this paper, we extend the analysis and evaluation to also cover quad-core servers. We present a probabilistic model and empirical data showing that execution slowdowns get worse as the number of instances of the same program increases. Our scheduling simulations show that if all co-schedules containing multiple instances of the same program are removed, the average slowdown decreases from 54% to 46% and the worst-case slowdown decreases from 173% to 108%.
  •  
13.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Initial Formulation of Why Disallowing Same Program Co-schedules Improves Performance
  • 2017. - 1
  • In: Co-Scheduling of HPC Applications. - Netherlands : IOS Press. - 9781614997290 - 9781614997306 ; , pp. 95-113
  • Book chapter (peer-reviewed), abstract:
    • Co-scheduling processes on different cores in the same server might lead to excessive slowdowns if they use the same shared resource, like a memory bus. If possible, processes with high shared-resource use should be allocated to different server nodes to avoid contention, thus avoiding slowdown. This article proposes the more general principle that twins, i.e. several instances of the same program, should be allocated to different server nodes. The rationale for this is that instances of the same program use the same resources, and they are more likely to be either both low or both high resource users. High resource users should obviously not be combined but, somewhat non-intuitively, it is also shown that low resource users should not be combined either, in order to not miss out on better scheduling opportunities. This is verified using both a probabilistic argument and experiments with ten programs from the NAS parallel benchmark suite running on two different systems. By using the simple rule of forbidding these terrible twins, the average slowdown is shown to decrease from 6.6% to 5.9% for System A and from 9.5% to 8.3% for System B. Furthermore, the worst-case slowdown is lowered from 12.7% to 9.0% and from 19.5% to 13% for systems A and B, respectively. This indicates a considerable improvement despite the rule being program-agnostic and having no information about any program’s resource usage or slowdown behavior.
  •  
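The probabilistic argument behind the terrible-twins rule can be sketched with a toy enumeration over dual-core pairings. The two-class high/low model and all slowdown costs below are invented for illustration and are not the numbers measured in the chapter; the sketch only shows why forbidding same-program pairs lowers the average slowdown.

```python
def pairings(items):
    """All ways to split an even-length list into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def avg_slowdown(jobs, slow, forbid_twins=False):
    """Mean total slowdown over all pairings; optionally drop pairings
    that co-schedule two instances of the same program (twins).
    Assumes at least one twin-free pairing exists when forbidding."""
    total, count = 0.0, 0
    for p in pairings(jobs):
        if forbid_twins and any(a == b for a, b in p):
            continue
        total += sum(slow(a, b) for a, b in p)
        count += 1
    return total / count

# Hypothetical cost model: co-running two high-bandwidth jobs costs 0.5,
# a high with a low costs 0.1, two lows cost 0.0.
usage = {"H": 1, "L": 0}
slow = lambda a, b: {2: 0.5, 1: 0.1, 0: 0.0}[usage[a] + usage[b]]
jobs = ["H", "H", "L", "L"]  # two instances ("twins") of each program
print(avg_slowdown(jobs, slow), avg_slowdown(jobs, slow, forbid_twins=True))
```

Because twins share a resource class, the twin-containing pairing is exactly the one that co-locates the two high users, so excluding it lowers the average.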
14.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Method for Experimental Measurement of an Applications Memory Bus Usage
  • 2010
  • In: CSREA Press.
  • Conference paper (peer-reviewed), abstract:
    • The disproportion between processor and memory bus capacities has increased constantly during the last decades. With the introduction of multi-core processors, the memory bus capacity is divided between the simultaneously executing processes (cores). The memory bus capacity directly affects the number of applications that can execute simultaneously at full potential. Against this backdrop, it becomes important to estimate how the limitations of the memory bus affect application performance. Toward this end we introduce a method and a tool for experimental estimation of an application's memory requirements, as well as of the impact that sharing the memory bus has on execution times. The tool enables black-box approximate profiling of an application's memory bus usage during execution. It executes entirely in user space and does not require access to the application code, only the binary.
  •  
15.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Minimizing Total Cost ($$) and Maximizing Throughput : A Metric for Node versus Core Usage in Multi-Core Clusters
  • 2010
  • In: Proceedings of the International conference on Parallel and Distributed Processing Techniques and Applications. - Las Vegas : CSREA Press. - 1601321562 - 1601321570 ; , pp. 241-248
  • Conference paper (peer-reviewed), abstract:
    • When most commercial clusters had one processor core per node, decreasing the runtime meant executing the application over more nodes, and the associated cost (in $) would scale linearly with the number of nodes. However, with the recent advances of multi-core processors, the execution time can be reduced by utilizing more nodes or by utilizing more cores in the same nodes. In industrial cluster environments a key question is how to run the applications to minimize the total cost while maximizing the throughput and minimizing the solution times of the individual jobs. The number of cores used and their contribution to the total runtime reduction is especially interesting since companies often use commercial software that is licensed per year and process. The annual license cost of one single process is often far greater than that of a complete cluster node including maintenance and power. In this paper we present a metric for calculating the optimal way to run an application on a cluster consisting of multi-core nodes in order to minimize the cost of executing the job.
  •  
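The trade-off this metric captures can be illustrated with a toy cost model. The formula and all prices and runtimes below are invented for illustration and are not the paper's metric: a job is assumed to hold its per-process licenses and its nodes for the duration of the run, billed pro rata from their annual costs.

```python
HOURS_PER_YEAR = 8760.0

def job_cost(runtime_h, nodes, procs, license_per_proc_year, node_cost_year):
    """Cost of one run under the toy model: the job holds `procs` licenses
    and `nodes` nodes for `runtime_h` hours, each billed pro rata from
    its annual cost."""
    hourly = (procs * license_per_proc_year + nodes * node_cost_year) / HOURS_PER_YEAR
    return hourly * runtime_h

# Hypothetical runtimes for an 8-process job: 1 core/node on 8 nodes vs
# 2 cores/node on 4 nodes (entry 1 above found 1 core/node often faster).
configs = {(8, 8): 10.0, (4, 8): 12.0}  # (nodes, procs) -> runtime in hours
for (nodes, procs), rt in configs.items():
    print(nodes, "nodes:", round(job_cost(rt, nodes, procs, 20000, 2000), 2))
```

With license costs dominating node costs, the faster 8-node run comes out cheaper despite holding twice as many nodes, illustrating why running on more nodes can be cost-beneficial.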
16.
  • de Blanche, Andreas, 1975-, et al. (author)
  • Node Sharing for Increased Throughput and Shorter Runtimes : an Industrial Co-Scheduling Case Study
  • 2018
  • In: Proceedings of the 3rd Workshop on Co-Scheduling of HPC Applications (COSH 2018). ; , pp. 15-20
  • Conference paper (peer-reviewed), abstract:
    • The allocation of jobs to nodes and cores in industrial clusters is often based on queue-system standard settings, guesses, or perceived fairness between different users and projects. Unfortunately, hard empirical data is often lacking and jobs are scheduled and co-scheduled for no apparent reason. In this case study, we evaluate the performance impact of co-scheduling jobs using three types of applications and an existing 450+ node cluster at a company doing large-scale parallel industrial simulations. We measure the speedup when co-scheduling two applications together, sharing two nodes, compared to running the applications on separate nodes. Our results and analyses show that by enabling co-scheduling we improve performance on the order of 20% in both throughput and execution times, and improve the execution times even more if the cluster is running at low utilization. We also find that a simple reconfiguration of the number of threads used in one of the applications can lead to a performance increase of 35-48%, showing that there is a potentially large performance gain from changing current practice in industry.
  •  
17.
  •  
18.
  • Hattinger, Monika, 1969-, et al. (author)
  • Reviewing human-centric themes in intelligent manufacturing research
  • 2022
  • In: International Conference on Work Integrated Learning. - Trollhättan : University West. - 9789189325302 ; , pp. 125-127
  • Conference paper (other academic/artistic), abstract:
    • In the era of Industry 4.0, emergent digital technologies generate profound transformations in the industry toward developing intelligent manufacturing. The technologies included in Industry 4.0 are expected to bring new perspectives to the industry on how manufacturing can integrate new solutions to get maximum output with minimum resource utilization (Kamble et al., 2018). Industry 4.0 technologies have a great impact on production systems and processes, but they also affect organizational structures and working-life conditions by disrupting employees’ everyday practices and knowledge, in which competence and learning, human interaction, and organizational structures are key. Hence, new digital solutions need to be integrated with work and learning to generate more holistic and sustainable businesses (Carlsson et al., 2021). The core Industry 4.0 technologies are built on cyber-physical systems (CPS), cloud computing, and the Internet of things (IoT) (Kagermann et al., 2013; Zhou et al., 2018). In recent years, an array of additional technologies has been developed further, such as artificial intelligence (AI), big data analytics, augmented and virtual reality (AR/VR), cyber security, robotics, and automation. Industry 4.0 aims to create a potential for faster delivery times, more efficient and automated processes, higher quality, and customized products (Zheng et al., 2021). Hence, the ongoing transformation through the technological shift of production, in combination with market demands, pushes the industry and its production process. Recent research has substantially contributed to an increased understanding of the technological aspects of Industry 4.0. However, the utilization of technologies is only a part of the complex puzzle making up Industry 4.0 (Kagermann et al., 2013; Zheng et al., 2021). The impact Industry 4.0 technologies and applications have on the industrial context also changes and disrupts existing and traditional work practices (Taylor et al., 2020), management and leadership (Saucedo-Martínez et al., 2018), learning and skills (Tvenge & Martinsen, 2018), and education (Das et al., 2020). This research has shown a growing interest in human-centric aspects of Industry 4.0 (Nahavandi, 2019), i.e., the transformative effects Industry 4.0 has on humans, workplace design, organizational routines, skills, learning, etc. However, these aspects are scarcely considered in depth. Given this, and from a holistic point of view, there is a need to understand intelligent manufacturing practice from a human-centric perspective, where issues of work practices and learning are integrated, herein referred to as industrial work-integrated learning (I-WIL). I-WIL is a research area that particularly pays attention to knowledge production and learning capabilities related to use and development when technology and humans co-exist in industrial work settings (Shahlaei & Lundh Snis, 2022). Even if Industry 4.0 is still relevant for continuous development, a complementary Industry 5.0 has arisen that moves beyond efficiency and productivity as the sole goals, to reinforce a sustainable, human-centric, and resilient manufacturing industry (Breque et al., 2021; Nahavandi, 2019). Given this situation, the research question addressed here is: How does state-of-the-art research of Industry 4.0 technologies and applications consider human-centric aspects? A systematic literature review was conducted aiming to identify a future research agenda that emphasizes human-centric aspects of intelligent manufacturing and that will contribute to the field of manufacturing research and practices. This question was motivated by the fact that very few systematic literature reviews consider Industry 4.0 research incorporating human-centric aspects for developing intelligent manufacturing (Kamble et al., 2018; Zheng et al., 2021). The literature review was structured following Xiao and Watson’s (2019) methodology, consisting of the steps 1) initial corpus creation, 2) finalizing the corpus, and 3) analyzing the corpus, and we also used a bibliometric approach throughout the search process (Glänzel & Schoepflin, 1999). The keyword selection was categorized into three groups of search terms, “industry 4.0”, “manufacturing”, and “artificial intelligence”, see Figure 1 (not included here). Articles were collected from the meta-databases EBSCOhost, Scopus, Eric, and the database AIS, to quantify the presence of human-centric or human-involved AI approaches in recent manufacturing research. A total of 999 scientific articles were collected and clustered based on a list of application areas to investigate if there is a difference between the various areas in which artificial intelligence is used. The application areas are decision-making, digital twin, flexible automation, platformization, predictive maintenance, predictive quality, process optimization, production planning, and quality assessment. Throughout the review process, only articles that included both AI and human-centric aspects were screened and categorized. The final corpus included 386 articles, of which only 93 were identified as human-centric. These articles were categorized into three themes: 1) organizational change, 2) competence and learning, and 3) human-automation interaction. Theme 1 articles related mostly to the application areas of flexible automation (11), production planning (9), and predictive maintenance (5). Theme 2 concerned the application areas of production planning and quality assessment (7), and process optimization (7). Finally, theme 3 mainly focused on flexible automation (10), digital twin (3), and platformization (3). The rest of the corpus consisted of only one or two articles per related application area. To conclude, only a few articles were found that reinforce human-centric themes for Industry 4.0 implementations. The literature review identified obstacles and opportunities that affect manufacturing organizations' ability to reap the benefits of Industry 4.0. Hence, I-WIL is proposed as a research area to inform a new research agenda that captures the human and technological integration of Industry 4.0 and further illuminates human-centric aspects and themes for future sustainable intelligent manufacturing.
  •  
19.
  • Loconsole, Annabella, et al. (author)
  • Comparing the CDIO educational framework with University West’s WIL certification: do they complement each other?
  • 2021
  • In: VILÄR. - 9789189325036 ; , pp. 15-16
  • Conference paper (other academic/artistic), abstract:
    • Higher education institutions (HEIs) need to continuously improve their quality to prepare students for the society of the 21st century. They need to develop efficient ways of collaborating with various partners in the surrounding community. Close ties with business and industry, and diversity among staff and students, are necessary, especially within engineering education. An engineering degree should prepare students to develop a wide range of knowledge and skills, ranging from technical, scientific, and mathematical knowledge to soft skills such as teamwork, business skills, and critical analysis, which are also central sustainability competences. It is vital that learning for engineers takes place in the context of authentic engineering problems and processes, to develop these skills and to put theory into practice. Several initiatives focused on incorporating these skills in higher education exist. CDIO (Conceive, Design, Implement, Operate) is one of the most prominent initiatives within engineering education. CDIO targets the typical tasks an engineer performs when bringing new systems, products, and services to the market or society. The CDIO initiative was created to strengthen active and problem-based learning and to improve students' communication and professional skills. CDIO focuses on improving practical and work-related skills to better prepare engineering students for their future professional life. University West employs another initiative, Arbetsintegrerat lärande (AIL), which roughly translates to Work Integrated Learning (WIL). WIL shares much of the same philosophy as CDIO. All programs at University West are currently undergoing an AIL certification process. For engineering programs that have been working with CDIO, it is interesting to compare the two initiatives, since it is currently unclear how they differ. In this study we compare the CDIO educational framework with the WIL certification through a series of workshops, to identify in which areas they overlap and in which they differ. Would a program that has adopted the CDIO educational framework automatically fulfill the WIL certification?
  •  
20.
  • Lundh Snis, Ulrika, 1970-, et al. (author)
  • Artificial and Human Intelligence through Learning : How Industry Applications Need Human-in-the-loop
  • 2020
  • In: VILÄR. - Trollhättan : Högskolan Väst. - 9789188847867 ; , pp. 24-26
  • Conference paper (other academic/artistic), abstract:
    • This study addresses work-integrated learning from a workplace learning perspective. Two companies within the manufacturing industry (turbo machinery and aerospace), together with a multi-disciplinary research group, explore the opportunities and challenges related to applications of artificial intelligence and human intelligence, and how such applications can integrate and support learning at the workplace. The manufacturing industry is currently under extreme pressure to transform its organizations and competencies to reap the benefits of Industry 4.0. The main driver for Industry 4.0 is digitalization with disruptive technologies such as artificial intelligence, internet of things, machine learning, cyber-physical systems, digital platforms, etc. Many significant studies have highlighted the importance of human competence and learning in connection to Industry 4.0 in general, and disruptive technologies and their transformative consequences in particular. What impact do such technologies have on employees and their workplace? There is a lack of knowledge on how artificially intelligent systems actually take part in practices of human decision making and learning, and to what extent disruptive technology may support both employees and organizations to “learn”. The design and use of three real-world cases of artificial intelligence applications (as instances of Industry 4.0 initiatives) will form the basis of how to support human decision making and scale up for strategic action and learning. Following a work-integrated approach, the overall research question has been formulated together with the two industry partners: How can artificial and human intelligence, and learning, interact to bring manufacturing companies into Industry 4.0? An action-oriented research approach with in-depth qualitative and quantitative methods will be used in order to make sense of and learn about new applications and data sets related to a digitalized production. The contribution of this study will be three lessons learned, along with a generic model for learning and organizing in the context of Industry 4.0 initiatives. Tentative findings concern how artificial and human intelligence can be smartly integrated into the human work organization, i.e. the workplace. Many iterations of integrating the two intelligences are required. We will discuss a preliminary process model called “Super8”, in which AI systems must allow for providing feedback on progress as well as being able to incorporate high-level human input in the learning process. The practical implication of the study will be industrialized in the collaborating
  •  
21.
  • Lundmark, Elias, et al. (författare)
  • Increasing Throughput of Multiprogram HPC Workloads : Evaluating a SMT Co-Scheduling Approach
  • 2017
  • Konferensbidrag (refereegranskat)abstract
    • Simultaneous Multithreading (SMT) is a technique that allows for more efficient processor utilization by scheduling multiple threads on a single physical core. Previous research has shown an average throughput increase of around 20% with an SMT level of two, i.e. two threads per core. However, a bad combination of threads can actually result in decreased performance. To be conservative, many HPC systems have SMT disabled, thus limiting the number of scheduling slots in the system to one per core. However, for SMT not to hurt performance, we need to determine which threads should share a core. In this poster, we use 30 random SPEC CPU job mixes on a twelve-core Broadwell-based node to study the impact of enabling SMT using two different co-scheduling strategies. The results show that SMT can increase performance, especially when using no-same-program co-scheduling.
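The "no-same-program" co-scheduling strategy mentioned in the abstract can be illustrated with a small sketch. This is not the authors' actual scheduler; it is a minimal greedy pairing, assuming jobs are identified by program name, that fills each SMT core with two threads while avoiding pairing two instances of the same program whenever possible:

```python
from collections import defaultdict

def no_same_program_pairs(jobs):
    """Greedily pair jobs onto SMT cores (two threads per core),
    avoiding pairing two instances of the same program when possible.
    `jobs` is a list of program names; returns a list of tuples,
    where a 1-tuple means a thread runs alone on its core."""
    pools = defaultdict(list)
    for job in jobs:
        pools[job].append(job)
    pairs = []
    while sum(len(v) for v in pools.values()) > 1:
        # Take one thread from the largest pool...
        ordered = sorted(pools, key=lambda p: len(pools[p]), reverse=True)
        a = pools[ordered[0]].pop()
        # ...and prefer a *different* program for the partner slot.
        partner = next((p for p in ordered[1:] if pools[p]), ordered[0])
        b = pools[partner].pop()
        pairs.append((a, b))
        pools = defaultdict(list, {k: v for k, v in pools.items() if v})
    leftover = [j for v in pools.values() for j in v]
    return pairs + [(j,) for j in leftover]
```

For a mix like `["bzip2", "bzip2", "mcf", "mcf"]` this yields two mixed-program pairs; when a mix is dominated by one program, same-program pairs become unavoidable, which is exactly the situation where the poster's results suggest SMT gains shrink.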
  •  
22.
  • Lundqvist, Thomas, 1967-, et al. (författare)
  • Comparing the CDIO Standards with the Work-Integrated Learning Certification
  • 2022
  • Ingår i: Proceedings of the International CDIO Conference 18th International CDIO Conference, CDIO 2022 Reykjavik. - : Chalmers University of Technology. - 9789935965561 ; , s. 37-47
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)abstract
    • Improving the quality of higher education is an important responsibility of universities and colleges. Several approaches have been developed with the goal of improving the quality of university study programs. In this paper we compare the CDIO (Conceive, Design, Implement, Operate) and the work-integrated learning (WIL) initiatives, based on recently completed WIL certifications at University West. Through a series of workshops, the CDIO standards are compared with the aspects and criteria of the WIL certification guidelines to identify overlapping areas and differences between the two initiatives. The results show that the initiatives overlap but also differ in several aspects. These differences could be useful for strengthening the WIL certification process at University West as well as for clarifying the connection between CDIO and work-integrated learning.
  •  
23.
  • Lundqvist, Thomas, 1967-, et al. (författare)
  • Thing-to-thing electricity micro payments using blockchain technology
  • 2017
  • Ingår i: Global Internet of Things Summit (GIoTS), 2017. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781509058730 ; , s. 261-266
  • Konferensbidrag (refereegranskat)abstract
    • Thing-to-thing payments are a key enabler in the Internet of Things (IoT) era, to ubiquitously allow devices to pay each other for services without any human interaction. Traditional credit card-based systems are not able to handle this new paradigm; however, blockchain technology is a promising payment candidate in this context. The prominent example of blockchain technology is Bitcoin, with its decentralized structure and ease of account creation. This paper presents a proof-of-concept implementation of a smart cable that connects to a smart socket and, without any human interaction, pays for electricity. In this paper, we identify several obstacles to the widespread use of bitcoins in thing-to-thing payments. A critical problem is the high transaction fees in the Bitcoin network when doing micro-transactions. To reduce this impact, we present a single-fee micro-payment protocol that aggregates multiple smaller payments incrementally into one larger transaction needing only one transaction fee. The proof-of-concept shows that trustless, autonomous, and ubiquitous thing-to-thing micro-payments are no longer a future technology.
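The aggregation idea in the abstract can be sketched in a few lines. This is a toy model, not the paper's actual Bitcoin protocol: each micro-payment replaces the pending (unbroadcast) transaction with one covering the new cumulative amount, so only the final transaction is broadcast and only one fee is ever paid. The class and field names are illustrative assumptions:

```python
class MicroPaymentChannel:
    """Toy sketch of single-fee micro-payment aggregation: every
    incremental payment rebuilds one pending transaction for the
    running total instead of creating a new fee-bearing transaction."""

    def __init__(self, payer, payee, fee_satoshi):
        self.payer, self.payee = payer, payee
        self.fee = fee_satoshi       # paid once, on settlement
        self.cumulative = 0          # total owed so far, in satoshi
        self.pending_tx = None       # latest replacement transaction

    def pay(self, amount_satoshi):
        # Aggregate: replace the pending transaction with one
        # covering the new cumulative amount.
        self.cumulative += amount_satoshi
        self.pending_tx = {
            "from": self.payer,
            "to": self.payee,
            "amount": self.cumulative,
            "fee": self.fee,
        }
        return self.pending_tx

    def settle(self):
        # Only the final aggregated transaction is broadcast,
        # so the fee is incurred exactly once.
        tx, self.pending_tx = self.pending_tx, None
        return tx
```

With three payments of 10, 20, and 5 satoshi, settlement broadcasts a single 35-satoshi transaction carrying one fee, instead of three fee-bearing transactions.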
  •  