In most cases, the notebooks lead you through implementing models such as convolutional networks, recurrent networks, and GANs. Each directory has a requirements.txt describing the minimal dependencies required to run the notebooks in that directory.

Autoregressive modeling is used to predict the next word using the context words occurring either before or after the missing word in question. An autoencoder is composed of encoder and decoder sub-models (a minimal sketch follows the paper list below). The deep learning-based transfer learning method, also known as deep transfer learning, has shown great merit in establishing generalized models. And yes, the advent of transfer learning has definitely helped accelerate the research.

The NABoE model performs particularly well on text classification tasks. Now, it might appear counter-intuitive to study all these advanced pretrained models and, at the end, discuss a model that uses a plain (relatively) old bidirectional LSTM to achieve SOTA performance.

There can be two types of edges. Step 3: Perform self-attention on each node of the graph with its neighboring nodes. I appreciate this model in the sense that it made me revisit the concept of graphs and venture into looking up graph neural networks.

We present a novel masked image modeling (MIM) approach, context autoencoder (CAE), for self-supervised representation pretraining.

If our repo or survey is useful for your research, please cite our paper, Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review. Please help contribute to this list by contacting me or opening a pull request.

- Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. arXiv, 2020.
- A Temporal Chrominance Trigger for Clean-label Backdoor Attack against Anti-spoof Rebroadcast Detection. arXiv, 2022. [pdf]
- Trembling Triggers: Exploring the Sensitivity of Backdoors in DNN-based Face Recognition. [code]
- Invisible Backdoor Attack with Sample-Specific Triggers. [pdf]
- Backdoors Stuck at The Frontdoor: Multi-Agent Backdoor Attacks That Backfire. [pdf]
- Deep Partition Aggregation: Provable Defense against General Poisoning Attacks. arXiv, 2022. [pdf]
- Quantization Backdoors to Deep Learning Models. [pdf]
- Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song.
- Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. arXiv, 2022.
- Backdoor Attacks Against Deep Learning Systems in the Physical World. [pdf]
- Hidden Backdoors in Human-Centric Language Models. [code]
- Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun.
- Mary Roszel, Robert Norvill, and Radu State.
- What Do You See? [pdf]
- BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture. [pdf]
- Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, and Surya Nepal.
- Red Alarm for Pre-trained Models: Universal Vulnerabilities by Neuron-Level Backdoor Attacks. [pdf]
- Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash. arXiv, 2022.
- What Do Deep Nets Learn? [pdf]
- Omid Aramoon, Pin-Yu Chen, Gang Qu, and Yuan Tian. arXiv, 2022. [pdf]
- Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. [pdf]
- Attack-Resistant Federated Learning with Residual-based Reweighting. [pdf]
- Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy. arXiv, 2022. [pdf]
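Since autoencoders come up repeatedly above, here is a minimal PyTorch sketch of the encoder/decoder structure. The layer sizes, optimizer, and random input batch are illustrative assumptions, not taken from any cited paper:

```python
import torch
import torch.nn as nn

# A minimal autoencoder: the encoder compresses the input to a short code,
# and the decoder reconstructs the input from that code.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                      # stand-in for flattened images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction loss
loss.backward()
optimizer.step()
```

Training simply asks the network to reproduce its own input, so the low-dimensional code is forced to retain the information needed for reconstruction.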
The reader is encouraged to play around with the network architecture and hyperparameters to improve the reconstruction quality and the loss values. Training a deep autoencoder or a classifier on MNIST digits [DEEP LEARNING]. It is able to learn a function that maps a set of 256x256-pixel face images, for example, to a vector of length 100, and also the inverse function that transforms the vector back into a 256x256-pixel face image.

The model learns to translate an image from a source domain X to a target domain Y in the absence of paired examples.

A combination of bidirectional LSTM and regularization is able to achieve SOTA performance on the IMDb document classification task and stands shoulder-to-shoulder with other bigwigs in this domain. If a machine can differentiate between a noun and a verb, or detect a customer's satisfaction with the product in his/her review, we can use this understanding for other advanced NLP tasks like understanding context or even generating a brand-new story!

Federated learning brings machine learning models to the data source, rather than bringing the data to the model (a federated-averaging sketch follows the paper list below).

This repository contains material related to Udacity's Deep Learning Nanodegree Foundation program.

- Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, and Xiangyu Zhang.
- Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, and Bhavani Thuraisingham.
- Detecting Backdoored Neural Networks with Structured Adversarial Attacks. arXiv, 2021. [pdf] [extension]
- Stealthy and Flexible Trojan in Deep Learning Framework. [pdf]
- Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, and J. Zico Kolter.
- BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements. [pdf]
- Gorka Abad, Servio Paguada, Stjepan Picek, Víctor Julio Ramírez-Durán, and Aitor Urbieta.
- Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks. [link] [pdf]
- Yingzhe He, Guozhu Meng, Kai Chen, Jinwen He, and Xingbo Hu.
- Bypassing Backdoor Detection Algorithms in Deep Learning. [code]
- Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models. [link]
- Few-shot Backdoor Defense Using Shapley Estimation. [code]
- AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis. [pdf]
- Noise-response Analysis for Rapid Detection of Backdoors in Deep Neural Networks. arXiv, 2021. [dataset]
- Backdoor Defense via Decoupling the Training Process. [pdf]
- Anti-Backdoor Learning: Training Clean Models on Poisoned Data. [code]
- PICCOLO: Exposing Complex Backdoors in NLP Transformer Models. [code]
- Yunjie Ge, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang.
- Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, and Bin He.
- A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning. [pdf]
- Can You Hear It? [link]
- Mingjie Sun, Siddhant Agarwal, and J. Zico Kolter.
- Sparse autoencoder based feature transfer learning for speech emotion recognition.
- Model Agnostic Defense against Backdoor Attacks in Machine Learning. [pdf]
- Watermarking Pre-trained Encoders in Contrastive Learning. [pdf]
- DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation. [pdf]
- Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, and Kui Ren. arXiv, 2020.
- Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, and Weiqiang Liu.
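To make the federated learning idea concrete, here is a minimal federated-averaging (FedAvg-style) sketch in PyTorch, as referenced above. The toy model, the synthetic client datasets, and the round count are illustrative assumptions, not code from any cited paper:

```python
import copy
import torch
import torch.nn as nn

def local_update(model, data, targets, lr=0.1):
    """One client's local training step; the raw data never leaves the client."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss = nn.functional.cross_entropy(local(data), targets)
    loss.backward()
    opt.step()
    return local.state_dict()

def federated_average(states):
    """Server-side aggregation: average each parameter across client updates."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        for other in states[1:]:
            avg[key] = avg[key] + other[key]
        avg[key] = avg[key] / len(states)
    return avg

global_model = nn.Linear(20, 2)  # toy model shared across all clients
clients = [(torch.randn(16, 20), torch.randint(0, 2, (16,))) for _ in range(3)]

for _ in range(5):  # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(states))
```

Only model parameters travel to the server, which is also why poisoned client updates (the threat the federated-backdoor papers above study) are hard to inspect directly.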
The goal is to pretrain an encoder by solving the pretext task: estimate the masked patches from the visible patches in an image.

The neurons in the hidden layer contain Gaussian transfer functions, whose outputs fall off with distance from the neuron's center.

Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward.

We are now able to use a pre-existing model built on a huge dataset and tune it to achieve other tasks on a different dataset.

Unlike a traditional autoencoder, which maps the input onto a latent vector, a VAE maps the input data into the parameters of a probability distribution, such as the mean and variance of a Gaussian (a sketch follows the paper list below).

In a medical federated learning setting, multiple hospitals could team up to train a common model that recognizes potential tumors in medical scans.

There are also notebooks used as projects for the Nanodegree program.

20180428 IJCAI-18 (knowledge distillation, transfer learning): Better and Faster: Knowledge Transfer from Multiple Self-supervised Learning Tasks via Graph Distillation for Video Classification.

Super Resolution with sub-pixel CNN: Shi et al.

- NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations. arXiv, 2020. [pdf]
- TAD: Trigger Approximation based Black-box Trojan Detection for AI. [pdf]
- Esha Sarkar, Hadjer Benkraouda, and Michail Maniatakos.
- Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, and Shu-Tao Xia. arXiv, 2020.
- Dual-Key Multimodal Backdoors for Visual Question Answering. [pdf]
- Protecting Deep Cerebrospinal Fluid Cell Image Processing Models with Backdoor and Semi-Distillation. [code]
- SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. [link] [pdf]
- Guiyu Tian, Wenhao Jiang, Wei Liu, and Yadong Mu.
- Towards Effective and Robust Neural Trojan Defenses via Input Filtering. [pdf]
- Adversarial Unlearning of Backdoors via Implicit Hypergradient. [pdf]
- Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, and Masashi Sugiyama.
- Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks. [code]
- Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? arXiv, 2022. [pdf]
- DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models. [code]
- FL-Defender: Combating Targeted Attacks in Federated Learning. [pdf]
- Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, and Yunxin Liu.
- Zhicong Yan, Gaolei Li, Yuan Tian, Jun Wu, Shenghong Li, Mingzhe Chen, and H. Vincent Poor.
- Data-free Backdoor Removal based on Channel Lipschitzness. [code]
- Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior. [pdf]
- Threats to Pre-trained Language Models: Survey and Taxonomy. [pdf]
- Towards Inspecting and Eliminating Trojan Backdoors in Deep Neural Networks. [code]
- Turning a Curse Into a Blessing: Enabling Clean-Data-Free Defenses by Model Inversion. [pdf]
- Tianlong Chen, Zhenyu Zhang, Yihua Zhang*, Shiyu Chang, Sijia Liu, and Zhangyang Wang.
- Similarity-based Integrity Protection for Deep Learning Systems. [link]
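The VAE encoder described above can be sketched in a few lines. This is an illustrative PyTorch example; the layer sizes, latent dimension, and random input are assumptions, not from any cited work:

```python
import torch
import torch.nn as nn

# VAE encoder sketch: instead of a single latent vector, it outputs the mean
# and log-variance of a Gaussian, then samples a latent code via the
# reparameterization trick so gradients can flow through the sampling step.
class VAEEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mean_head = nn.Linear(256, latent_dim)
        self.logvar_head = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.hidden(x)
        mean, logvar = self.mean_head(h), self.logvar_head(h)
        std = torch.exp(0.5 * logvar)
        z = mean + std * torch.randn_like(std)  # reparameterization trick
        # KL divergence between q(z|x) and a standard normal prior
        kl = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp(), dim=1)
        return z, kl

encoder = VAEEncoder()
z, kl = encoder(torch.rand(8, 784))
print(z.shape, kl.shape)  # torch.Size([8, 16]) torch.Size([8])
```

The KL term is added to the reconstruction loss during training, which is what pushes the learned distribution toward the Gaussian prior.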
What is an adversarial example?

Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. It differs from supervised learning in not needing labelled input/output pairs to be presented.

Google's new Text-to-Text Transfer Transformer (T5) model uses transfer learning for a variety of NLP tasks. So, even for a classification task, the input will be text, and the output will again be a word instead of a label (a sketch follows the paper list below). The corpus uses an enhanced version of Common Crawl. Though BERT's autoencoding objective did take care of bidirectional context, it had other disadvantages, such as assuming no correlation between the masked words.

A variational autoencoder learns a low-dimensional representation of the important information in its training data.

Transfer Learning [10, 11] is another interesting paradigm to prevent overfitting.

- Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, and Maosong Sun.
- Turning Your Weakness into a Strength: Watermarking Deep Neural Networks by Backdooring. [pdf]
- Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, and Willy Susilo.
- Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification. [Master Thesis]
- Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, and Shu-Tao Xia.
- TrojViT: Trojan Insertion in Vision Transformers. [pdf]
- Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation. [pdf]
- Zhi Chen, Yadan Luo, Sen Wang, Jingjing Li, and Zi Huang. arXiv, 2022.
- Clean-label Backdoor Attack against Deep Hashing based Retrieval. [pdf]
- Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. [code]
- Zhenting Wang, Hailun Ding, Juan Zhai, and Shiqing Ma. arXiv, 2020.
- Sébastien Andreina, Giorgia Azzurra Marson, Helen Möllering, and Ghassan Karame.
- PerD: Perturbation Sensitivity-based Neural Trojan Detection Framework on NLP Applications. [pdf]
- Topological Detection of Trojaned Neural Networks. [pdf]
- Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks. [pdf]
- Diego Garcia-soto, Huili Chen, and Farinaz Koushanfar.
- Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, and Zhonghai Wu. arXiv, 2022.
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. [link]
- Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, and Derek Abbott. arXiv, 2021; arXiv, 2022.
- Jun Yan, Vansh Gupta, and Xiang Ren.
- LIRA: Learnable, Imperceptible and Robust Backdoor Attacks. [pdf]
- Scalable Backdoor Detection in Neural Networks. [pdf]
- Dynamic Backdoor Attacks Against Machine Learning Models. arXiv, 2021. [code]
- Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, and Hamed Pirsiavash.
- Khoa Doan, Yingjie Lao, Weijie Zhao, and Ping Li.
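The text-to-text framing is easy to see with the public Hugging Face transformers API. This is an illustrative sketch, not from the survey; it assumes the transformers and sentencepiece packages are installed, and uses the released multitask t5-small checkpoint, whose pretraining included an "sst2 sentence:" sentiment task prefix:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 casts classification as text-to-text: the task is named in the prompt,
# and the "label" comes back as generated words rather than a class index.
inputs = tokenizer("sst2 sentence: This movie was wonderful!",
                   return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# expected output along the lines of: "positive"
```

Because every task shares the same text-in/text-out interface, a single model and loss function can serve translation, summarization, and classification alike.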
- For example, the output of Task 1 is used as training for Tasks 1 and 2; the output of that is then used for training Tasks 1, 2, and 3, and so on.
- The edge connecting a parent node to its children.
- The edge connecting the leaf nodes with other nodes.
- SOTA performance on Chinese-to-English machine translation (BLEU score: 19.84).
- Accuracy of 92.12 for sentiment analysis on the IMDb dataset (combined with …).
- Using a neural network to detect the entities.
- Using the attention mechanism to compute the weights on the detected entities (this decides the relevance of the entities for the document in question); a sketch of these two steps follows this list.
- It is the first paper to use a combination of LSTM + regularization for document classification.
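The two entity-related steps above can be sketched as a small attention module over detected entities, as referenced in the list. This is an illustrative PyTorch example in the spirit of NABoE, not the paper's actual architecture; the dimensions, the random vectors standing in for the entity detector's output, and the scoring layer are all assumptions:

```python
import torch
import torch.nn as nn

# Sketch: embed detected entities, score each entity's relevance to the
# document with attention, then pool the weighted entities for classification.
class EntityAttentionClassifier(nn.Module):
    def __init__(self, embed_dim=64, num_classes=2):
        super().__init__()
        self.scorer = nn.Linear(2 * embed_dim, 1)          # scores (doc, entity) pairs
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, doc_vec, entity_vecs):
        # doc_vec: (embed_dim,); entity_vecs: (num_entities, embed_dim)
        doc = doc_vec.unsqueeze(0).expand(entity_vecs.size(0), -1)
        scores = self.scorer(torch.cat([doc, entity_vecs], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=0)             # relevance of each entity
        entity_summary = (weights.unsqueeze(-1) * entity_vecs).sum(dim=0)
        return self.classifier(torch.cat([doc_vec, entity_summary], dim=-1))

model = EntityAttentionClassifier()
logits = model(torch.randn(64), torch.randn(5, 64))  # 5 detected entities
```

The softmax weights play the role described above: they decide how much each detected entity contributes to the document representation before classification.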