
A quick guide to Amazon's papers at NeurIPS 2023


The Conference on Neural Information Processing Systems (NeurIPS) takes place this week, and the Amazon papers accepted there touch on a wide range of topics, from experimental design and human-robot interaction to recommender systems and real-time statistical estimation. Amid that diversity, a few topics stand out for particular attention: optimization, privacy, tabular data, time series forecasting, vision-language models, and, most notably, reinforcement learning.

Code generation

Large language models of code fail at completing code with potential bugs
Tuan Dinh, Jinman Zhao, Samson Tan, Renato Negrinho, Leonard Lausen, Sheng Zha, George Karypis

Complex query answering

Complex query answering on eventuality knowledge graph with implicit logical constraints
Jiaxin Bai, Xin Liu, Weiqi Wang, Chen Luo, Yangqiu Song

An example of a complex eventuality query, with its computational and informational atomics. V is something that happens before a person complains and leaves a restaurant; according to the knowledge graph, V can be either "Service is bad" or "Food is bad". If V? is the reason for V, then according to the graph, V? can be either "Staff is new", "PersonY adds ketchup", "PersonY adds soy sauce", or "PersonY adds vinegar". However, from the query, we know that "PersonY adds vinegar" doesn't happen, and "PersonY adds soy sauce" happens after "food is bad", so it can't be the reason for "food is bad". From "Complex query answering on eventuality knowledge graph with implicit logical constraints".
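The constraint propagation the caption describes can be sketched with a toy triple store. All node and relation names below are illustrative, not the paper's actual graph or model; the sketch only shows how candidates for V are read off the graph and how implicit logical constraints (negation and temporal order) prune the candidate reasons.

```python
# Toy eventuality knowledge graph as (head, relation, tail) triples.
# Names are illustrative, following the figure caption, not the paper's data.
triples = {
    ("service is bad", "happens_before", "person complains and leaves"),
    ("food is bad", "happens_before", "person complains and leaves"),
    ("staff is new", "reason_for", "service is bad"),
    ("PersonY adds ketchup", "reason_for", "food is bad"),
    ("PersonY adds soy sauce", "reason_for", "food is bad"),
    ("PersonY adds vinegar", "reason_for", "food is bad"),
}

def answers(relation, tail):
    """All heads h with (h, relation, tail) in the graph."""
    return {h for (h, r, t) in triples if r == relation and t == tail}

# Step 1: V is anything that happens before the complaint.
v_candidates = answers("happens_before", "person complains and leaves")

# Step 2: implicit logical constraints from the query prune the reasons:
# "PersonY adds vinegar" is known not to happen (negation), and
# "PersonY adds soy sauce" happens *after* "food is bad" (temporal order),
# so neither can be the reason for "food is bad".
excluded = {"PersonY adds vinegar", "PersonY adds soy sauce"}
reasons = {v: answers("reason_for", v) - excluded for v in v_candidates}
```

Running the sketch leaves "staff is new" as the only reason for "service is bad" and "PersonY adds ketchup" as the only surviving reason for "food is bad", matching the caption's walkthrough.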

Experimental design

Experimental designs for heteroskedastic variance
Justin Weltz, Tanner Fiez, Eric Laber, Alexander Volfovsky, Blake Mason, Houssam Nassif, Lalit Jain

Federated learning

Federated multi-objective learning
Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma

Human-robot interaction

Alexa Arena: A user-centric interactive platform for embodied AI
Qiaozi (QZ) Gao, Govind Thattai, Suhaila Shakiah, Xiaofeng Gao, Shreyas Pansare, Vasu Sharma, Gaurav Sukhatme, Hangjie Shi, Bofei Yang, Desheng Zhang, Lucy Hu, Karthika Arumugam, Shui Hu, Matthew Wen, Dinakar Guthy, Cadence Chung, Rohan Khanna, Osman Ipek, Leslie Ball, Kate Bland, Heather Rocker, Michael Johnston, Reza Ghanadan, Dilek Hakkani-Tür, Prem Natarajan

Optimization

Bounce: Reliable high-dimensional Bayesian optimization for combinatorial and mixed spaces
Leonard Papenmeier, Luigi Nardi, Matthias Poloczek

Debiasing conditional stochastic optimization
Lie He, Shiva Kasiviswanathan

Distributionally robust Bayesian optimization with φ-divergences
Hisham Husain, Vu Nguyen, Anton van den Hengel

Ordinal classification

Conformal prediction sets for ordinal classification
Prasenjit Dey, Srujana Merugu, Sivaramakrishnan (Siva) Kaveri

Privacy

Creating a public repository for joining private data
James Cook, Milind Shyani, Nina Mishra

A stylized illustration of the repository problem. The sender S uploads a private count sketch capturing which people do and do not have cancer. The receiver R uses the sketch to enrich her data (people's work locations) with a noisy version of S's cancer column. Two noisy columns are generated: one for cancer (+1) and one for not (−1). R can then build a machine learning model to predict whether employees who work near a toxic waste site are more likely to develop cancer. From "Creating a public repository for joining private data".
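The release-then-join pattern in the caption can be sketched as follows. This is a generic differentially-private-style release (per-record ±1 encoding plus Laplace noise), much simpler than the paper's actual count-sketch construction; the function names, the noise scale, and the person-id keys are all assumptions made for illustration.

```python
import math
import random

def noisy_signed_column(labels, epsilon=1.0, seed=0):
    """Release a noisy +/-1 version of a sensitive binary column.

    labels maps person id -> True/False (e.g., has cancer).
    Each value is encoded as +1/-1 and perturbed with Laplace noise of
    scale 2/epsilon (sensitivity 2 for a +/-1 value). Illustrative only:
    the paper uses a private count sketch, not a per-record release.
    """
    rng = random.Random(seed)
    scale = 2.0 / epsilon
    out = {}
    for pid, has_condition in labels.items():
        signed = 1.0 if has_condition else -1.0
        # Laplace sample via the inverse CDF.
        u = rng.random() - 0.5
        noise = -scale * (1.0 if u >= 0 else -1.0) * math.log(1.0 - 2.0 * abs(u))
        out[pid] = signed + noise
    return out

def join(locations, noisy_col):
    """Receiver R enriches her location data with the sender's noisy column."""
    return {pid: (loc, noisy_col[pid])
            for pid, loc in locations.items() if pid in noisy_col}
```

R would then feed the joined (location, noisy label) pairs into a model; the noise protects individual records while aggregate patterns survive.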

Scalable membership inference attacks via quantile regression
Martin Bertran Lopez, Shuai Tang, Michael Kearns, Jamie Morgenstern, Aaron Roth, Zhiwei Steven Wu

Real-time statistical estimation

Online robust non-stationary estimation
Abishek Sankararaman, Balakrishnan (Murali) Narayanaswamy

Recommender systems

Enhancing user intent capture in session-based recommendation with attribute patterns
Xin Liu, Zheng Li, Yifan Gao, Jingfeng Yang, Tianyu Cao, Zhengyang Wang, Bing Yin, Yangqiu Song

Reinforcement learning

Budgeting counterfactual for offline RL
Yao Liu, Pratik Chaudhari, Rasool Fakoor

Finite-time logarithmic Bayes regret upper bounds
Alexia Atsidakou, Branislav Kveton, Sumeet Katariya, Constantine Caramanis, Sujay Sanghavi

Resetting the optimizer in deep RL: An empirical study
Kavosh Asadi, Rasool Fakoor, Shoham Sabach

TD convergence: An optimization perspective
Kavosh Asadi, Shoham Sabach, Yao Liu, Omer Gottesman, Rasool Fakoor

Responsible AI

Improving fairness for spoken language understanding in atypical speech with text-to-speech
Helin Wang, Venkatesh Ravichandran, Milind Rao, Becky Lammers, Myra Sydnor, Nicholas Maragakis, Ankur A. Butala, Jayne Zhang, Victoria Chovaz, Laureano Moro-Velazquez

Tabular data

An inductive bias for tabular deep learning
Ege Beyazit, Jonathan Kozaczuk, Bo Li, Vanessa Wallace, Bilal Fadlallah

HYTREL: Hypergraph-enhanced tabular data representation learning
Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, George Karypis

An example of modeling a table as a hypergraph. Cells make up the nodes, and the cells in each row, each column, and the entire table form hyperedges. The table caption and the header names provide the names of the table and column hyperedges. The hypergraph retains the four structural properties of tables (e.g., row/column permutations result in the same hypergraph). From "HYTREL: Hypergraph-enhanced tabular data representation learning".
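The caption's construction can be sketched directly: cells become nodes, and each row, each column, and the full table contributes one hyperedge, with column hyperedges named by headers and the table hyperedge named by the caption. This is only a structural sketch of the hypergraph, under assumed names; HYTREL's actual representation learning on top of it is not shown.

```python
def table_to_hypergraph(caption, headers, rows):
    """Model a table as a hypergraph, as in the figure caption.

    Nodes are cells, keyed by (row index, column index). Hyperedges are
    one per row, one per column (named after its header), and one for
    the entire table (named after its caption). Illustrative sketch only.
    """
    nodes = {(i, j): cell
             for i, row in enumerate(rows)
             for j, cell in enumerate(row)}
    hyperedges = {}
    # One hyperedge per row: all cells sharing a row index.
    for i in range(len(rows)):
        hyperedges[f"row_{i}"] = {(i, j) for j in range(len(headers))}
    # One hyperedge per column, named by its header.
    for j, name in enumerate(headers):
        hyperedges[f"col_{name}"] = {(i, j) for i in range(len(rows))}
    # One hyperedge covering the whole table, named by its caption.
    hyperedges[f"table_{caption}"] = set(nodes)
    return nodes, hyperedges
```

Because hyperedges are unordered sets of cells, permuting rows or columns relabels node indices but yields the same incidence structure, which is the permutation invariance the caption refers to.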

Time series forecasting

Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting
Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, Yuyang (Bernie) Wang

PreDiff: Precipitation nowcasting with latent diffusion models
Zhihan Gao, Xingjian Shi, Boran Han, Hao Wang, Xiaoyong Jin, Danielle Maddix Robinson, Yi Zhu, Mu Li, Yuyang (Bernie) Wang

Vision-language models

Prompt pre-training with twenty-thousand classes for open-vocabulary visual recognition
Shuhuai Ren, Aston Zhang, Yi Zhu, Shuai Zhang, Shuai Zheng, Mu Li, Alex Smola, Xu Su

Your representations are in the network: Composable and parallel adaptation for large scale models
Yonatan Dukler, Alessandro Achille, Hao Yang, Ben Bowman, Varsha Vivek, Luca Zancato, Avinash Ravichandran, Charless Fowlkes, Ashwin Swaminathan, Stefano Soatto


