METELLI, ALBERTO MARIA
 Geographic distribution
Continent #
AS - Asia 3,795
NA - North America 3,429
EU - Europe 3,131
SA - South America 658
AF - Africa 124
OC - Oceania 17
Unknown continent - continent information not available 3
Total 11,157
Country #
US - United States of America 3,315
RU - Russian Federation 1,497
SG - Singapore 1,335
CN - China 994
IT - Italy 666
BR - Brazil 543
VN - Vietnam 506
KR - South Korea 275
JP - Japan 235
FR - France 183
DE - Germany 173
HK - Hong Kong 131
GB - United Kingdom 113
NL - Netherlands 98
FI - Finland 92
CA - Canada 72
SE - Sweden 64
IN - India 62
ID - Indonesia 61
ES - Spain 59
IE - Ireland 51
AR - Argentina 49
MA - Morocco 48
BD - Bangladesh 34
AT - Austria 30
IQ - Iraq 30
PL - Poland 27
CI - Côte d'Ivoire 23
MX - Mexico 23
TW - Taiwan 20
EC - Ecuador 16
AU - Australia 15
CH - Switzerland 15
JO - Jordan 14
UA - Ukraine 14
PK - Pakistan 12
PY - Paraguay 12
ZA - South Africa 11
CO - Colombia 10
TR - Turkey 10
UZ - Uzbekistan 10
VE - Venezuela 10
CZ - Czech Republic 8
KE - Kenya 8
AE - United Arab Emirates 7
BJ - Benin 7
CL - Chile 7
DZ - Algeria 7
KZ - Kazakhstan 7
NP - Nepal 7
PH - Philippines 7
IR - Iran 6
SA - Saudi Arabia 6
TH - Thailand 6
BE - Belgium 5
DO - Dominican Republic 5
GR - Greece 5
LT - Lithuania 5
PE - Peru 5
EG - Egypt 4
ET - Ethiopia 4
JM - Jamaica 4
PT - Portugal 4
RO - Romania 4
UY - Uruguay 4
AZ - Azerbaijan 3
KG - Kyrgyzstan 3
KW - Kuwait 3
LV - Latvia 3
PA - Panama 3
TN - Tunisia 3
AL - Albania 2
AO - Angola 2
BO - Bolivia 2
EU - Europe 2
HN - Honduras 2
IL - Israel 2
LB - Lebanon 2
MD - Moldova 2
MY - Malaysia 2
NO - Norway 2
SI - Slovenia 2
BY - Belarus 1
CG - Congo 1
CM - Cameroon 1
CR - Costa Rica 1
EE - Estonia 1
GE - Georgia 1
HR - Croatia 1
HU - Hungary 1
IS - Iceland 1
KH - Cambodia 1
MM - Myanmar 1
MT - Malta 1
MU - Mauritius 1
MV - Maldives 1
MZ - Mozambique 1
NC - New Caledonia 1
NI - Nicaragua 1
NZ - New Zealand 1
Total 11,148
City #
Singapore 726
Ashburn 720
San Jose 483
Milan 312
Hefei 291
Seoul 259
Santa Clara 208
Tokyo 202
Moscow 180
Chandler 178
Beijing 146
Ho Chi Minh City 126
Dallas 122
Hanoi 115
Los Angeles 114
Council Bluffs 105
Hong Kong 102
Lauterbourg 85
The Dalles 84
Boardman 72
North Charleston 67
Frankfurt am Main 63
Kent 62
Fairfield 61
New York 54
Helsinki 52
São Paulo 52
Amsterdam 50
Dublin 47
London 47
Jakarta 44
Da Nang 42
Buffalo 37
Stockholm 35
Las Vegas 32
Redmond 31
Seattle 30
Kenitra 29
Redwood City 28
Haiphong 27
Málaga 27
Chicago 24
Wilmington 24
Düsseldorf 23
Lappeenranta 23
Lawrence 23
Orem 23
Abidjan 22
Redondo Beach 21
Shanghai 21
Warsaw 20
Rome 19
Atlanta 18
Montreal 18
Ottawa 18
Guangzhou 17
Rio de Janeiro 16
Tianjin 16
Vienna 16
Woodbridge 16
Cambridge 15
Medford 15
Amman 14
Munich 14
Taipei 14
Biên Hòa 13
Brooklyn 13
Casablanca 13
Dong Ket 13
Nuremberg 13
Turku 13
Changsha 12
Curitiba 12
Phoenix 12
Pincourt 12
Toronto 12
Ann Arbor 11
Borgomanero 11
Boston 11
Brasília 11
Campinas 11
Monza 11
Belo Horizonte 10
Houston 10
Zurich 10
Baghdad 9
Chennai 9
Pittsburgh 9
Tashkent 9
Hải Dương 8
Mexico City 8
Princeton 8
Ribeirão Preto 8
Sant'Ambrogio di Torino 8
Xi'an 8
Adelaide 7
Cotonou 7
Denver 7
Poplar 7
San Diego 7
Total 6,350
Title #
Stochastic Rising Bandits 287
An Option-Dependent Analysis of Regret Minimization Algorithms in Finite-Horizon Semi-MDP 239
On the use of the policy gradient and Hessian in inverse reinforcement learning 226
Wasserstein Actor-Critic: Directed Exploration via Optimism for Continuous-Actions Control 220
Optimistic Policy Optimization via Multiple Importance Sampling 205
Feature Selection via Mutual Information: New Theoretical Insights 201
Combining reinforcement learning with rule-based controllers for transparent and general decision-making in autonomous driving 197
Dealing with multiple experts and non-stationarity in inverse reinforcement learning: an application to real-life problems 189
A Provably Efficient Option-Based Algorithm for both High-Level and Low-Level Learning 178
Configurable Markov Decision Processes 172
Gradient-Aware Model-Based Policy Search 171
Propagating Uncertainty in Reinforcement Learning via Wasserstein Barycenters 170
Best Arm Identification for Stochastic Rising Bandits 168
Convergence Analysis of Policy Gradient Methods with Dynamic Stochasticity 167
Balancing Sample Efficiency and Suboptimality in Inverse Reinforcement Learning 164
ARLO: A framework for Automated Reinforcement Learning 160
Control Frequency Adaptation via Action Persistence in Batch Reinforcement Learning 157
Autoregressive Bandits 156
Simultaneously Updating All Persistence Values in Reinforcement Learning 149
Advancing drought monitoring via feature extraction and multi-task learning algorithms 149
IWDA: Importance Weighting for Drift Adaptation in Streaming Supervised Learning Problems 149
Switching Latent Bandits 148
Optimizing Empty Container Repositioning and Fleet Deployment via Configurable Semi-POMDPs 147
Advancing drought monitoring via feature extraction 147
Dynamical Linear Bandits 145
A Tale of Sampling and Estimation in Discounted Reinforcement Learning 145
Trust Region Meta Learning for Policy Optimization 142
Importance Sampling Techniques for Policy Optimization 142
Switching Latent Bandits 139
Truly Batch Model-Free Inverse Reinforcement Learning about Multiple Intentions 139
Content-based approaches for cold-start job recommendations 139
A Provably Efficient Option-Based Algorithm for both High-Level and Low-Level Learning 138
A unified view of configurable Markov Decision Processes: Solution concepts, value functions, and operators 136
Factored-Reward Bandits with Intermediate Observations 134
Compatible Reward Inverse Reinforcement Learning 132
Truncating Trajectories in Monte Carlo Reinforcement Learning 131
Towards Theoretical Understanding of Inverse Reinforcement Learning 131
Provably Efficient Learning of Transferable Rewards 130
Policy optimization via importance sampling 129
Graph-Triggered Rising Bandits 129
Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning 128
Configurable Environments in Reinforcement Learning: An Overview 128
Tight Performance Guarantees of Imitator Policies with Continuous Actions 127
Policy Optimization as Online Learning with Mediator Feedback 126
Sleeping Reinforcement Learning 125
Local Linearity: the Key for No-regret Reinforcement Learning in Continuous MDPs 124
Sample complexity of variance-reduced policy gradient: weaker assumptions and lower bounds 122
Causal feature selection via transfer entropy 122
State and Action Factorization in Power Grids 122
Learning Optimal Deterministic Policies with Stochastic Policy Gradients 121
Transfer Learning for Dynamical Systems Models via Autoencoders and GANs 121
Interpretable Machine Learning for Extreme Events detection: An application to droughts in the Po River Basin 121
Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization 121
Exploiting environment configurability in reinforcement learning 114
(ϵ, u)-Adaptive Regret Minimization in Heavy-Tailed Bandits 113
Multi-Fidelity Best-Arm Identification 112
Dissimilarity Bandits 111
Online Learning with Off-Policy Feedback in Adversarial MDPs 110
Interpretable linear dimensionality reduction based on bias-variance analysis 108
Truncating Trajectories in Monte Carlo Policy Evaluation: an Adaptive Approach 107
Interpretable Target-Feature Aggregation for Multi-task Learning Based on Bias-Variance Analysis 100
No-Regret Reinforcement Learning in Smooth MDPs 100
Policy space identification in configurable environments 100
Information-Theoretic Regret Bounds for Bandits with Fixed Expert Advice 99
Storehouse: a Reinforcement Learning Environment for Optimizing Warehouse Management 99
Safe policy iteration: A monotonically improving approximate policy iteration approach 99
Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs 98
On the Relation between Policy Improvement and Off-Policy Minimum-Variance Policy Evaluation 97
Learning in Non-Cooperative Configurable Markov Decision Processes 96
Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms 95
Distributional Policy Evaluation: a Maximum Entropy approach to Representation Learning 94
Reinforcement Learning in Configurable Continuous Environments 93
Parameterized Projected Bellman Operator 91
Position: Constants are Critical in Regret Bounds for Reinforcement Learning 91
Factored-Reward Bandits with Intermediate Observations: Regret Minimization and Best Arm Identification 90
Sub-optimal Experts mitigate Ambiguity in Inverse Reinforcement Learning 89
How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach 88
The Power of Hybrid Learning in Industrial Robotics: Efficient Grasping Strategies with Supervised-Driven Reinforcement Learning 87
Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning 86
Towards Theoretical Understanding of Sequential Decision Making with Preference Feedback 84
Optimal multi-fidelity best-arm identification 80
Achieving \mathcal{O}(\sqrt{T}) Regret in Average-Reward POMDPs with Known Observation Models 78
Learning Utilities from Demonstrations in Markov Decision Processes 66
Search or split: policy gradient with adaptive policy space 53
Power Grid Control with Graph-Based Distributed Reinforcement Learning 51
Policy Gradient Methods with Adaptive Policy Spaces 46
Trading-off Reward Maximization and Stability in Sequential Decision Making 40
Recent Advancements in Inverse Reinforcement Learning 39
Human-AI interaction in safety-critical network infrastructures 18
Generalizing the Regret: an Analysis of Lower and Upper Bounds 16
Tightening Regret Lower and Upper Bounds in Restless Rising Bandits 16
Catoni-Style Change Point Detection for Regret Minimization in Piecewise-Stationary Heavy-Tailed Bandits 13
AReS: A patient simulator to facilitate testing of automated anesthesia 11
Information Capacity Regret Bounds for Bandits with Mediator Feedback 9
Minimax off-policy evaluation and learning with subgaussian and differentiable importance weighting 9
Achieving \mathcal{O}(\sqrt{T}) Regret in Average-Reward POMDPs with Known Observation Models 7
Efficient Exploitation of Hierarchical Structure in Sparse Reward Reinforcement Learning 4
Total 11,312
Category #
all - all 28,426
article - articles 6,818
book - books 396
conference - conference papers 20,756
curatela - editorships 0
other - other 0
patent - patents 0
selected - selected 0
volume - volumes 456
Total 56,852


Year Total Jul Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun
2020/2021 114 0 0 0 0 0 0 0 0 0 28 19 67
2021/2022 217 19 27 15 14 17 7 23 12 18 16 20 29
2022/2023 424 37 46 19 55 40 35 2 32 58 49 38 13
2023/2024 533 49 71 73 32 20 64 21 43 6 52 8 94
2024/2025 2,140 26 33 43 57 262 145 72 142 271 181 442 466
2025/2026 7,563 1,327 1,291 624 732 440 389 1,106 385 596 673 0 0
Total 11,312
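Each row of the monthly table lists the academic-year total followed by twelve monthly counts, July through June. A minimal sketch checking the 2021/2022 row against its stated total (values taken directly from the table):

```python
# Monthly download counts for academic year 2021/2022, July..June.
monthly_2021_2022 = [19, 27, 15, 14, 17, 7, 23, 12, 18, 16, 20, 29]

# The row total reported in the table is 217.
assert sum(monthly_2021_2022) == 217
```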