SwePub
Search the SwePub database

Results for the query "WFRF:(Kleyko Denis)"

  • Results 1-10 of 74
1.
  • Abdukalikova, Anara, et al. (author)
  • Detection of Atrial Fibrillation from Short ECGs : Minimalistic Complexity Analysis for Feature-Based Classifiers
  • 2018
  • In: Computing in Cardiology 2018. - : IEEE.
  • Conference paper (peer-reviewed), abstract:
    • In order to facilitate data-driven solutions for early detection of atrial fibrillation (AF), the 2017 CinC conference challenge was devoted to automatic AF classification based on short ECG recordings. The proposed solutions concentrated on maximizing the classifier's F1 score, whereas the complexity of the classifiers was not considered. However, we argue that complexity must be addressed, as it places restrictions on the applicability of inexpensive devices for AF monitoring outside hospitals. Therefore, this study investigates the feasibility of complexity reduction by analyzing one of the solutions presented for the challenge.
2.
  • Alonso, Pedro, 1986-, et al. (author)
  • HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing Enabled Embedding of n-gram Statistics
  • 2021
  • In: 2021 International Joint Conference on Neural Networks (IJCNN) Proceedings. - : IEEE.
  • Conference paper (peer-reviewed), abstract:
    • Recent advances in Deep Learning have led to a significant performance increase on several NLP tasks; however, the models have become increasingly computationally demanding. This paper therefore tackles the domain of computationally efficient algorithms for NLP tasks. In particular, it investigates distributed representations of n-gram statistics of texts. The representations are formed using hyperdimensional computing enabled embedding. These representations then serve as features, which are used as input to standard classifiers. We investigate the applicability of the embedding on one large and three small standard datasets for classification tasks using nine classifiers. The embedding achieved on-par F1 scores while decreasing the time and memory requirements by several times compared to the conventional n-gram statistics; e.g., for one of the classifiers on a small dataset, the memory reduction was 6.18 times, while train and test speed-ups were 4.62 and 3.84 times, respectively. For many classifiers on the large dataset, memory reduction was ca. 100 times and train and test speed-ups were over 100 times. Importantly, the usage of distributed representations formed via hyperdimensional computing allows dissecting the strict dependency between the dimensionality of the representation and the n-gram size, thus opening room for tradeoffs.
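The hyperdimensional n-gram embedding described in this abstract can be illustrated with a minimal sketch: each character gets a random bipolar hypervector, an n-gram is the elementwise product of position-shifted character vectors, and a text is the sum of its n-gram vectors. The dimensionality, alphabet, and shift-based position encoding below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000  # hypervector dimensionality (illustrative choice)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
# One random bipolar hypervector per character.
item = {c: rng.choice([-1, 1], size=D) for c in ALPHABET}

def embed(text, n=3):
    """Sum of n-gram hypervectors; each n-gram is the elementwise product
    of cyclically shifted character vectors (the shift encodes position)."""
    acc = np.zeros(D)
    for i in range(len(text) - n + 1):
        ngram = np.ones(D)
        for j, c in enumerate(text[i : i + n]):
            ngram *= np.roll(item[c], j)  # bind position-shifted vectors
        acc += ngram
    return acc

def cosine(a, b):
    """Similarity between two embedded texts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Note that the embedding dimension D is fixed regardless of the n-gram size, which is the dissected dependency the abstract refers to: conventional n-gram count vectors grow exponentially with n, while this representation does not.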
3.
  • Balasubramaniam, Sasitharan, et al. (author)
  • Exploiting bacterial properties for multi-hop nanonetworks
  • 2014
  • In: IEEE Communications Magazine. - Piscataway, NJ, USA : IEEE Press. - 0163-6804 .- 1558-1896. ; 52:7, pp. 184-191
  • Journal article (peer-reviewed), abstract:
    • Molecular communication is a relatively new communication paradigm for nanomachines in which communication is realized by utilizing existing biological components found in nature. In recent years, researchers have proposed using bacteria to realize molecular communication because bacteria have the ability to swim and migrate between locations, carry DNA contents (i.e., plasmids) that could be utilized for information storage, and interact with and transfer plasmids to other bacteria (one of these processes is known as bacterial conjugation). However, current proposals for bacterial nanonetworks have not considered the internal structures of the nanomachines that can facilitate the use of bacteria as an information carrier. This article presents the types and functionalities of nanomachines that can be utilized in bacterial nanonetworks. A particular focus is placed on bacterial conjugation and its support for multi-hop communication between nanomachines. Simulations of the communication process have been evaluated to analyze the quantity of bits received as well as the delay performance. Wet-lab experiments have also been conducted to validate the bacterial conjugation process. The article also discusses potential applications of bacterial nanonetworks for cancer monitoring and therapy. © 2014 IEEE.
4.
  • Bandaragoda, Tharindu, et al. (author)
  • Trajectory clustering of road traffic in urban environments using incremental machine learning in combination with hyperdimensional computing
  • 2019
  • In: The 2019 IEEE Intelligent Transportation Systems Conference - ITSC. - : IEEE. - 9781538670248 - 9781538670255 ; , pp. 1664-1670
  • Conference paper (peer-reviewed), abstract:
    • Road traffic congestion in urban environments poses an increasingly complex challenge of detection, profiling and prediction. Although public policy promotes transport alternatives and new infrastructure, traffic congestion is highly prevalent and continues to be the lead cause of numerous social, economic and environmental issues. Although a significant volume of research has been reported on road traffic prediction, profiling of traffic has received much less attention. In this paper we address two key problems in traffic profiling by proposing a novel unsupervised incremental learning approach for road traffic congestion detection and profiling, dynamically over time. This approach (a) uses hyperdimensional computing to capture variable-length trajectories of commuter trips, represented as vehicular movement across intersections, and (b) transforms these into feature vectors that can be incrementally learned over time by the Incremental Knowledge Acquiring Self-Learning (IKASL) algorithm. The proposed approach was tested and evaluated on a dataset consisting of approximately 190 million vehicular movement records obtained from 1,400 Bluetooth identifiers placed at the intersections of the arterial road network in the State of Victoria, Australia.
5.
  • Bybee, Connor, et al. (author)
  • Efficient optimization with higher-order Ising machines
  • 2023
  • In: Nature Communications. - : Nature Research. - 2041-1723. ; 14
  • Journal article (peer-reviewed), abstract:
    • A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently, in terms of the number of spin variables and their connections, than traditional second-order Ising machines. Further, our results show on a benchmark dataset of Boolean k-satisfiability problems that higher-order Ising machines implemented with coupled oscillators rapidly find solutions that are better than those of second-order Ising machines, thus improving the current state of the art for Ising machines.
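The seamless mapping from satisfiability to higher-order Ising networks mentioned in this abstract can be sketched concretely: each 3-literal clause becomes one degree-3 energy term that equals 1 exactly when the clause is violated, so the ground-state energy counts unsatisfied clauses. The tiny instance and brute-force search below are illustrative only; the paper's machines use coupled oscillators, not enumeration.

```python
from itertools import product

# Hypothetical 3-SAT instance: signed integers are literals,
# e.g. (+1, +2, -3) means (x1 OR x2 OR NOT x3).
clauses = [(+1, +2, -3), (-1, +2, +3), (+1, -2, +3)]

def energy(spins, clauses):
    """Higher-order Ising energy: sum over clauses of
    prod_i (1 - l_i * s_i) / 2, with spins s_i in {-1, +1}.
    Each factor is 1 iff its literal is false, so a clause's
    product is 1 iff the whole clause is unsatisfied."""
    e = 0.0
    for clause in clauses:
        term = 1.0
        for lit in clause:
            s = spins[abs(lit) - 1]  # spin of the variable
            l = 1 if lit > 0 else -1  # literal sign
            term *= (1 - l * s) / 2
        e += term
    return e

# Exhaustive ground-state search over the 2^3 spin configurations.
best = min(product([-1, 1], repeat=3), key=lambda s: energy(s, clauses))
```

Because each clause maps to a single cubic term over three spins, no auxiliary variables are needed, which is the resource advantage over second-order machines that must quadratize such terms.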
6.
  • Coelho Mollo, Dimitri, et al. (author)
  • Beyond the Imitation Game : Quantifying and extrapolating the capabilities of language models
  • 2023
  • In: Transactions on Machine Learning Research. ; :5
  • Journal article (peer-reviewed), abstract:
    • Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. 
7.
  • Dhole, Kaustubh, et al. (author)
  • NL-Augmenter : A Framework for Task-Sensitive Natural Language Augmentation
  • 2023
  • In: NEJLT Northern European Journal of Language Technology. - 2000-1533. ; 9:1, pp. 1-41
  • Journal article (peer-reviewed), abstract:
    • Data augmentation is an important method for evaluating the robustness of, and enhancing the diversity of, training data for natural language processing (NLP) models. In this paper, we present NL-Augmenter, a new participatory Python-based natural language (NL) augmentation framework which supports the creation of transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of NL tasks annotated with noisy descriptive tags. The transformations incorporate noise, intentional and accidental human mistakes, socio-linguistic variation, semantically-valid style and syntax changes, as well as artificial constructs that are unambiguous to humans. We demonstrate the efficacy of NL-Augmenter by using its transformations to analyze the robustness of popular language models. We find different models to be differently challenged on different tasks, with quasi-systematic score decreases. The infrastructure, datacards, and robustness evaluation results are publicly available on GitHub for the benefit of researchers working on paraphrase generation, robustness analysis, and low-resource NLP.
8.
  • Diao, C., et al. (author)
  • Generalized Learning Vector Quantization for Classification in Randomized Neural Networks and Hyperdimensional Computing
  • 2021
  • In: Proceedings of the International Joint Conference on Neural Networks. - : Institute of Electrical and Electronics Engineers Inc.
  • Conference paper (peer-reviewed), abstract:
    • Machine learning algorithms deployed on edge devices must meet certain resource constraints and efficiency requirements. Random Vector Functional Link (RVFL) networks are favored for such applications due to their simple design and training efficiency. We propose a modified RVFL network that avoids computationally expensive matrix operations during training, thus expanding the network's range of potential applications. Our modification replaces the least-squares classifier with the Generalized Learning Vector Quantization (GLVQ) classifier, which only employs simple vector and distance calculations. The GLVQ classifier can also be considered an improvement upon certain classification algorithms popularly used in the area of Hyperdimensional Computing. The proposed approach achieved state-of-the-art accuracy on a collection of datasets from the UCI Machine Learning Repository, higher than previously proposed RVFL networks. We further demonstrate that our approach still achieves high accuracy while severely limited in training iterations (using on average only 21% of the least-squares classifier's computational costs).
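The "simple vector and distance calculations" of the GLVQ classifier mentioned here can be sketched as a single training step: the nearest prototype of the correct class is attracted to the sample and the nearest prototype of any other class is repelled, with step sizes derived from the relative-distance cost mu = (d+ - d-)/(d+ + d-). This is a textbook GLVQ update with a simplified (identity) cost transfer function and made-up data, not the paper's full pipeline.

```python
import numpy as np

def glvq_step(protos, labels, x, y, lr=0.1):
    """One GLVQ update: attract the closest prototype of the correct
    class y, repel the closest prototype of any other class."""
    d = np.sum((protos - x) ** 2, axis=1)  # squared Euclidean distances
    same = np.where(labels == y)[0]
    diff = np.where(labels != y)[0]
    i = same[np.argmin(d[same])]  # closest correct prototype
    j = diff[np.argmin(d[diff])]  # closest incorrect prototype
    dp, dm = d[i], d[j]
    denom = (dp + dm) ** 2
    # Gradients of mu = (dp - dm) / (dp + dm) w.r.t. the two prototypes.
    protos[i] += lr * (4 * dm / denom) * (x - protos[i])  # attract
    protos[j] -= lr * (4 * dp / denom) * (x - protos[j])  # repel
    return protos
```

Note that a step touches only two prototypes and uses no matrix inversion, which is why it avoids the least-squares classifier's expensive matrix operations on edge devices.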
9.
  • Frady, E. Paxon, et al. (author)
  • A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks
  • 2018
  • In: Neural Computation. - : MIT Press. - 0899-7667 .- 1530-888X. ; 30:6, pp. 1449-1513
  • Journal article (peer-reviewed), abstract:
    • To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and cross-talk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are aligned. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
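The sequence-indexing scheme described in this abstract can be sketched in a few lines: symbols are random hypervectors held in a superposition trace, an orthogonal recurrent operation (here a cyclic shift, an illustrative assumption) tags each item with its position, and readout applies the inverse shifts followed by winner-take-all cleanup against the codebook. The lossy cross-talk noise the theory analyzes is visible as the extra terms in the trace.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_symbols = 2048, 8  # illustrative sizes
codebook = rng.choice([-1.0, 1.0], size=(n_symbols, D))

def store(sequence):
    """Trace after feeding the sequence: each step applies the recurrent
    shift to the trace and superposes the new item's hypervector."""
    trace = np.zeros(D)
    for sym in sequence:
        trace = np.roll(trace, 1) + codebook[sym]
    return trace

def recall(trace, steps_back):
    """Winner-take-all readout of the item stored steps_back steps ago."""
    unshifted = np.roll(trace, -steps_back)
    return int(np.argmax(codebook @ unshifted))

seq = [3, 1, 4, 1, 5]
trace = store(seq)
```

With D much larger than the sequence length, the signal term dominates the cross-talk and recall is reliable; as more items are stored, cross-talk grows, which is the capacity limit the paper's theory quantifies.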
10.
  • Frady, E. P., et al. (author)
  • Computing on Functions Using Randomized Vector Representations (in brief)
  • 2022
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : Association for Computing Machinery. - 9781450395595 ; , pp. 115-122
  • Conference paper (peer-reviewed), abstract:
    • Vector space models for symbolic processing that encode symbols by random vectors have been proposed in cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA), and, synonymously, Hyperdimensional (HD) computing [22, 31, 46]. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points approximately represents a similarity kernel. By analogy to VSA, we call this new function encoding and computing framework Vector Function Architecture (VFA). In VFAs, vectors can represent individual data points as well as elements of a function space (a reproducing kernel Hilbert space). The algebraic vector operations, inherited from VSA, correspond to well-defined operations in function space. Furthermore, we study a previously proposed method for encoding continuous data, fractional power encoding (FPE), which uses exponentiation of a random base vector to produce randomized representations of data points and fulfills the kernel properties for inducing a VFA. We show that the distribution from which components of the base vector are sampled determines the shape of the FPE kernel, which in turn induces a VFA for computing with band-limited functions. In particular, VFAs provide an algebraic framework for implementing large-scale kernel machines with random features, extending [51]. Finally, we demonstrate several applications of VFA models to problems in image recognition, density estimation and nonlinear regression. Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems, with myriad potential applications in artificial intelligence.
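The fractional power encoding (FPE) studied in this abstract has a compact sketch: a scalar x is encoded by raising a random unit-phasor base vector elementwise to the power x, and the normalized inner product between two encodings approximates a shift-invariant kernel in x - y. The dimensionality and the uniform phase distribution below are assumptions for illustration (the paper shows the phase distribution shapes the kernel).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # illustrative dimensionality
phases = rng.uniform(-np.pi, np.pi, size=D)
base = np.exp(1j * phases)  # random phasor base vector

def fpe(x):
    """Fractional power encoding of a scalar: elementwise power of the
    base vector, i.e. exp(1j * phases * x)."""
    return base ** x

def kernel(x, y):
    """Similarity between two encoded scalars: normalized real part of
    the inner product, which concentrates around a function of x - y."""
    return float(np.real(np.vdot(fpe(x), fpe(y))) / D)
```

With uniform phases this kernel approximates sinc(x - y), so nearby scalars get similar vectors while distant ones become nearly orthogonal; that locality is what lets the algebraic VSA operations act as well-defined operations on band-limited functions.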
Type of publication
journal article (41)
conference paper (28)
report (2)
other publication (1)
doctoral thesis (1)
licentiate thesis (1)
Type of content
peer-reviewed (68)
other scholarly/artistic (5)
popular science, debate, etc. (1)
Author/editor
Kleyko, Denis (50)
Osipov, Evgeny (43)
Kleyko, Denis, 1990- (20)
Wiklund, Urban (11)
Lyamin, Nikita, 1989 ... (5)
Vyatkin, Valeriy (4)
De Silva, Daswin (4)
Alahakoon, Damminda (4)
Olshausen, B. A. (4)
Gayler, Ross W. (4)
Kanerva, Pentti (4)
Frady, E. Paxon (4)
Sommer, Friedrich T. (4)
Vinel, Alexey, 1983- (3)
Papakonstantinou, Ni ... (3)
Bybee, Connor (3)
Sommer, F. T. (3)
Lyamin, Nikita (3)
Frady, Edward Paxon (3)
Hostettler, Roland (2)
Birk, Wolfgang (2)
Riliskis, Laurynas (2)
Khosrowshahi, A. (2)
Frady, E. P. (2)
Kymn, C. J. (2)
Koucheryavy, Yevgeni (1)
Abdukalikova, Anara (1)
Liwicki, Marcus (1)
Coelho Mollo, Dimitr ... (1)
Alonso, Pedro, 1986- (1)
Shridhar, Kumar (1)
Zhang, Yue (1)
Nilsson, Joakim (1)
Pang, Zhibo (1)
Balasubramaniam, Sas ... (1)
Skurnik, Mikael (1)
Bandaragoda, Tharind ... (1)
Patil, Sandeep (1)
Mousavi, Arash (1)
Björk, Magnus (1)
Nikonov, Dmitri E (1)
Srivastava, Aarohi (1)
Wu, Ziyi (1)
Delooz, Quentin, 199 ... (1)
Dhole, Kaustubh (1)
Diao, C. (1)
Rabaey, J. M. (1)
Dyer, Adrian G (1)
Kahawala, Sachin (1)
Grytsenko, Vladimir ... (1)
Higher education institution
Luleå tekniska universitet (57)
RISE (26)
Umeå universitet (12)
Högskolan i Halmstad (5)
Örebro universitet (3)
Language
English (74)
Research subject (UKÄ/SCB)
Natural sciences (66)
Engineering and technology (26)
Medical and health sciences (3)
Humanities (1)
