
A curated list of AI security resources inspired by awesome-adversarial-machine-learning & awesome-ml-for-cybersecurity.

Legend:

- Research
- Slides
- Video
- Website / Blog post
- Code
- Other

Adversarial examples

- Explaining and Harnessing Adversarial Examples
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
- Delving into Transferable Adversarial Examples and Black-box Attacks
- On the (Statistical) Detection of Adversarial Examples
- The Space of Transferable Adversarial Examples
- Adversarial Attacks on Neural Network Policies
- Adversarial Perturbations Against Deep Neural Networks for Malware Classification
- Crafting Adversarial Input Sequences for Recurrent Neural Networks
- Practical Black-Box Attacks against Machine Learning
- Adversarial examples in the physical world
- Robust Physical-World Attacks on Deep Learning Models
- Can you fool AI with adversarial examples on a visual Turing test?
- Synthesizing Robust Adversarial Examples
- Defensive Distillation is Not Robust to Adversarial Examples
- Vulnerability of machine learning models to adversarial examples
- Adversarial Examples for Evaluating Reading Comprehension Systems
- Adversarial Examples and Adversarial Training by Ian Goodfellow at Stanford
- Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
- Did you hear that? Adversarial Examples Against Automatic Speech Recognition
- Adversarial Manipulation of Deep Representations
- Exploring the Space of Adversarial Images
- Note on Attacking Object Detectors with Adversarial Stickers
- Adversarial Patch
- LOTS about Attacking Deep Features
- Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
- Adversarial Images for Variational Autoencoders
- Delving into adversarial attacks on deep policies
- Simple Black-Box Adversarial Perturbations for Deep Networks
- DeepFool: a simple and accurate method to fool deep neural networks
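
Several of the papers above build on the fast gradient sign method (FGSM) from "Explaining and Harnessing Adversarial Examples": perturb the input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic-regression model (the weights and inputs below are invented for illustration):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM on a logistic-regression 'model': move x by eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y=1)
    grad_x = (p - y) * w                    # dL/dx for logistic loss
    return x + eps * np.sign(grad_x)

# Toy example: a point confidently in class 1 is pushed across the boundary.
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5])                    # w @ x + b = 1.5 -> class 1
x_adv = fgsm(x, w, b, y=1, eps=0.9)
print(w @ x + b, w @ x_adv + b)             # 1.5 vs. a negative score
```

For deep networks the gradient is computed by backpropagation instead of in closed form, but the one-step sign update is the same.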

Evasion

- Query Strategies for Evading Convex-Inducing Classifiers
- Evasion attacks against machine learning at test time
- Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers
- Looking at the Bag is not Enough to Find the Bomb: An Evasion of Structural Methods for Malicious PDF Files Detection
- Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers
- Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
- Fast Feature Fool: A data independent approach to universal adversarial perturbations
- One pixel attack for fooling deep neural networks
- Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
- RHMD: Evasion-Resilient Hardware Malware Detectors
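
Many of these evasion attacks need only query access to the target's decisions, not its gradients. A minimal sketch in the spirit of the simple black-box and one-pixel attacks, run against a hypothetical threshold detector (the detector and inputs are invented for illustration):

```python
import numpy as np

def one_coordinate_evasion(classify, x, budget=1.0, steps=100, seed=0):
    """Black-box evasion: randomly perturb one input coordinate at a time,
    querying only the classifier's decision, until the label flips."""
    rng = np.random.default_rng(seed)
    orig = classify(x)
    for _ in range(steps):
        i = rng.integers(len(x))
        cand = x.copy()
        cand[i] += rng.uniform(-budget, budget)
        if classify(cand) != orig:
            return cand                     # evasion found
    return None

# Hypothetical detector: flags any input whose mean feature value exceeds 0.5.
detector = lambda v: int(v.mean() > 0.5)
x = np.array([0.6, 0.6, 0.6])               # initially flagged (label 1)
x_adv = one_coordinate_evasion(detector, x)
```

Real attacks are far more query-efficient, but the threat model is the same: decisions in, evasive sample out.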

Poisoning

- Poisoning Behavioral Malware Clustering
- Efficient Label Contamination Attacks Against Black-Box Learning Models
- Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
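
Unlike evasion, poisoning corrupts the training set itself. A toy illustration using a 1-nearest-neighbour detector as a stand-in for the clustering and black-box learners these papers target (all data below is synthetic):

```python
import numpy as np

def knn1_predict(Xtr, ytr, Xte):
    """1-nearest-neighbour classifier: label each test point
    with the label of its closest training point."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X_benign = rng.normal(-2, 0.5, size=(30, 2))
X_malicious = rng.normal(+2, 0.5, size=(30, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 30 + [1] * 30)           # 0 = benign, 1 = malicious

X_test = rng.normal(+2, 0.5, size=(10, 2))  # new malware samples
before = knn1_predict(X, y, X_test)         # detected as malicious

# Poisoning: the attacker slips benign-labelled copies of its malware into
# the training set, so each test sample's nearest neighbour is "benign".
X_pois = np.vstack([X, X_test])
y_pois = np.concatenate([y, np.zeros(10, dtype=int)])
after = knn1_predict(X_pois, y_pois, X_test)  # now classified benign
```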

Feature selection

- Is Feature Selection Secure against Training Data Poisoning?

Misc

- Can Machine Learning Be Secure?
- On The Integrity Of Deep Learning Systems In Adversarial Settings
- Stealing Machine Learning Models via Prediction APIs
- Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
- Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- A Methodology for Formalizing Model-Inversion Attacks
- Adversarial Attacks against Intrusion Detection Systems: Taxonomy, Solutions and Open Issues
- Adversarial Data Mining for Cyber Security
- High Dimensional Spaces, Deep Learning and Adversarial Examples
- Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space
- Adversarial Machines
- Adversarial Task Allocation
- Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
- Adversarial Robustness: Softmax versus Openmax
- DEF CON 25 - Hyrum Anderson - Evading next gen AV using AI
- Adversarial Learning for Good: My Talk at #34c3 on Deep Learning Blindspots
- Universal adversarial perturbations
- Camouflage from face detection - CV Dazzle
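
"Stealing Machine Learning Models via Prediction APIs" shows that models exposing rich outputs can often be reconstructed exactly by equation solving. A sketch of the simplest case, assuming a hypothetical API that returns the raw score of a secret linear model (with sigmoid confidences one would invert the sigmoid first):

```python
import numpy as np

def extract_linear_model(query, dim):
    """Equation-solving extraction: if an API returns scores of a linear
    model w.x + b, then dim + 1 queries recover w and b exactly."""
    X = np.vstack([np.zeros(dim), np.eye(dim)])  # origin + unit vectors
    scores = np.array([query(x) for x in X])
    b = scores[0]                                # score at the origin is b
    w = scores[1:] - b                           # score at e_i is w_i + b
    return w, b

# Hypothetical victim API wrapping a secret linear model.
secret_w = np.array([1.5, -2.0, 0.5]); secret_b = 0.7
victim = lambda x: float(secret_w @ x + secret_b)

w_hat, b_hat = extract_linear_model(victim, dim=3)
```

The paper extends this idea to logistic regression, decision trees, and neural networks, typically trading exactness for more queries.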

Code

- CleverHans - Python library to benchmark machine learning systems' vulnerability to adversarial examples
- Model extraction attacks on Machine-Learning-as-a-Service platforms
- Foolbox - Python toolbox to create adversarial examples
- Adversarial Machine Learning Library (Ad-lib)
- Deep-pwning
- DeepFool
- Universal adversarial perturbations
- Malware Env for OpenAI Gym
- Exploring the Space of Adversarial Images
- StringSifter - A machine learning tool that ranks strings based on their relevance for malware analysis

Links

- EvadeML - Machine Learning in the Presence of Adversaries
- Adversarial Machine Learning - PRA Lab
- Adversarial Examples and their implications

Source: https://github.com/DeepSpaceHarbor/Awesome-AI-Security