323 rows (id 0 to 322). `paper_ID`: 12-character arXiv identifiers; `publication_date` and `update_date` range from 2024-09-12 to 2025-08-12; `image` widths range from 1.14k to 1.7k px.

| id | paper_ID | paper_title | publication_date | update_date | image |
|---|---|---|---|---|---|
| 0 | 2508.09071v1 | GeoVLA: Empowering 3D Representations in Vision-Language-Action Models | 2025-08-12 | 2025-08-12 | |
| 1 | 2508.08508v1 | Re:Verse -- Can Your VLM Read a Manga? | 2025-08-11 | 2025-08-11 | |
| 2 | 2508.07501v1 | FormCoach: Lift Smarter, Not Harder | 2025-08-10 | 2025-08-10 | |
| 3 | 2508.07023v1 | MV-CoRe: Multimodal Visual-Conceptual Reasoning for Complex Visual Question Answering | 2025-08-09 | 2025-08-09 | |
| 4 | 2508.06350v1 | Aligning Effective Tokens with Video Anomaly in Large Language Models | 2025-08-08 | 2025-08-08 | |
| 5 | 2508.05899v1 | HOLODECK 2.0: Vision-Language-Guided 3D World Generation with Editing | 2025-08-07 | 2025-08-07 | |
| 6 | 2508.04931v1 | INTENTION: Inferring Tendencies of Humanoid Robot Motion Through Interactive Intuition and Grounded VLM | 2025-08-06 | 2025-08-06 | |
| 7 | 2508.03967v1 | RAVID: Retrieval-Augmented Visual Detection: A Knowledge-Driven Approach for AI-Generated Image Identification | 2025-08-05 | 2025-08-05 | |
| 8 | 2508.02917v1 | Following Route Instructions using Large Vision-Language Models: A Comparison between Low-level and Panoramic Action Spaces | 2025-08-04 | 2025-08-04 | |
| 9 | 2508.01943v1 | ROVER: Recursive Reasoning Over Videos with Vision-Language Models for Embodied Tasks | 2025-08-03 | 2025-08-03 | |
| 10 | 2508.01408v1 | Artificial Intelligence and Misinformation in Art: Can Vision Language Models Judge the Hand or the Machine Behind the Canvas? | 2025-08-02 | 2025-08-02 | |
| 11 | 2508.01057v2 | Edge-Based Multimodal Sensor Data Fusion with Vision Language Models (VLMs) for Real-time Autonomous Vehicle Accident Avoidance | 2025-08-01 | 2025-08-12 | |
| 12 | 2508.03737v1 | GanitBench: A bi-lingual benchmark for evaluating mathematical reasoning in Vision Language Models | 2025-07-31 | 2025-07-31 | |
| 13 | 2507.23134v1 | Details Matter for Indoor Open-vocabulary 3D Instance Segmentation | 2025-07-30 | 2025-07-30 | |
| 14 | 2507.22958v1 | CHECK-MAT: Checking Hand-Written Mathematical Answers for the Russian Unified State Exam | 2025-07-29 | 2025-07-29 | |
| 15 | 2507.21335v1 | Analyzing the Sensitivity of Vision Language Models in Visual Question Answering | 2025-07-28 | 2025-07-28 | |
| 16 | 2507.20409v1 | Cognitive Chain-of-Thought: Structured Multimodal Reasoning about Social Situations | 2025-07-27 | 2025-07-27 | |
| 17 | 2507.20077v1 | The Devil is in the EOS: Sequence Training for Detailed Image Captioning | 2025-07-26 | 2025-07-26 | |
| 18 | 2507.19679v1 | Efficient Learning for Product Attributes with Compact Multimodal Models | 2025-07-25 | 2025-07-25 | |
| 19 | 2507.18743v1 | SAR-TEXT: A Large-Scale SAR Image-Text Dataset Built with SAR-Narrator and Progressive Transfer Learning | 2025-07-24 | 2025-07-24 | |
| 20 | 2507.17722v1 | BetterCheck: Towards Safeguarding VLMs for Automotive Perception Systems | 2025-07-23 | 2025-07-23 | |
| 21 | 2507.17050v1 | Toward Scalable Video Narration: A Training-free Approach Using Multimodal Large Language Models | 2025-07-22 | 2025-07-22 | |
| 22 | 2507.16856v1 | SIA: Enhancing Safety via Intent Awareness for Vision-Language Models | 2025-07-21 | 2025-07-21 | |
| 23 | 2507.15025v1 | Survey of GenAI for Automotive Software Development: From Requirements to Executable Code | 2025-07-20 | 2025-07-20 | |
| 24 | 2507.15882v2 | Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark | 2025-07-18 | 2025-08-04 | |
| 25 | 2507.13568v2 | LoRA-Loop: Closing the Synthetic Replay Cycle for Continual VLM Learning | 2025-07-17 | 2025-07-29 | |
| 26 | 2507.12644v1 | VLMgineer: Vision Language Models as Robotic Toolsmiths | 2025-07-16 | 2025-07-16 | |
| 27 | 2507.11730v1 | Seeing the Signs: A Survey of Edge-Deployable OCR Models for Billboard Visibility Analysis | 2025-07-15 | 2025-07-15 | |
| 28 | 2507.10548v1 | EmbRACE-3K: Embodied Reasoning and Action in Complex Environments | 2025-07-14 | 2025-07-14 | |
| 29 | 2507.09615v1 | Towards Fine-Grained Adaptation of CLIP via a Self-Trained Alignment Score | 2025-07-13 | 2025-07-13 | |
| 30 | 2507.09209v1 | Uncertainty-Driven Expert Control: Enhancing the Reliability of Medical Vision-Language Models | 2025-07-12 | 2025-07-12 | |
| 31 | 2507.08982v1 | VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models | 2025-07-11 | 2025-07-11 | |
| 32 | 2507.07939v2 | SAGE: A Visual Language Model for Anomaly Detection via Fact Enhancement and Entropy-aware Alignment | 2025-07-10 | 2025-07-22 | |
| 33 | 2507.07317v2 | ADIEE: Automatic Dataset Creation and Scorer for Instruction-Guided Image Editing Evaluation | 2025-07-09 | 2025-07-28 | |
| 34 | 2507.08030v1 | A Systematic Analysis of Declining Medical Safety Messaging in Generative AI Models | 2025-07-08 | 2025-07-08 | |
| 35 | 2507.05515v2 | LEGO Co-builder: Exploring Fine-Grained Vision-Language Modeling for Multimodal LEGO Assembly Assistants | 2025-07-07 | 2025-07-23 | |
| 36 | 2507.04524v1 | VLM-TDP: VLM-guided Trajectory-conditioned Diffusion Policy for Robust Long-Horizon Manipulation | 2025-07-06 | 2025-07-06 | |
| 37 | 2507.04107v2 | VICI: VLM-Instructed Cross-view Image-localisation | 2025-07-05 | 2025-07-22 | |
| 38 | 2507.13361v1 | VLMs have Tunnel Vision: Evaluating Nonlocal Visual Reasoning in Leading VLMs | 2025-07-04 | 2025-07-04 | |
| 39 | 2507.03123v1 | Towards a Psychoanalytic Perspective on VLM Behaviour: A First-step Interpretation with Intriguing Observations | 2025-07-03 | 2025-07-03 | |
| 40 | 2507.02190v1 | cVLA: Towards Efficient Camera-Space VLAs | 2025-07-02 | 2025-07-02 | |
| 41 | 2507.02001v1 | Temporal Chain of Thought: Long-Video Understanding by Thinking in Frames | 2025-07-01 | 2025-07-01 | |
| 42 | 2507.00209v3 | SurgiSR4K: A High-Resolution Endoscopic Video Dataset for Robotic-Assisted Minimally Invasive Procedures | 2025-06-30 | 2025-07-29 | |
| 43 | 2506.23329v1 | IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering | 2025-06-29 | 2025-06-29 | |
| 44 | 2507.02948v3 | DriveMRP: Enhancing Vision-Language Models with Synthetic Motion Data for Motion Risk Prediction | 2025-06-28 | 2025-07-13 | |
| 45 | 2506.22636v1 | ReCo: Reminder Composition Mitigates Hallucinations in Vision-Language Models | 2025-06-27 | 2025-06-27 | |
| 46 | 2506.21656v1 | Fine-Grained Preference Optimization Improves Spatial Reasoning in VLMs | 2025-06-26 | 2025-06-26 | |
| 47 | 2506.20832v1 | Leveraging Vision-Language Models to Select Trustworthy Super-Resolution Samples Generated by Diffusion Models | 2025-06-25 | 2025-06-25 | |
| 48 | 2507.02909v1 | Beyond Token Pruning: Operation Pruning in Vision-Language Models | 2025-06-24 | 2025-06-24 | |
| 49 | 2506.19079v1 | Reading Smiles: Proxy Bias in Foundation Models for Facial Emotion Recognition | 2025-06-23 | 2025-06-23 | |
| 50 | 2506.18140v1 | See-in-Pairs: Reference Image-Guided Comparative Vision-Language Models for Medical Diagnosis | 2025-06-22 | 2025-06-22 | |
| 51 | 2506.17811v2 | RoboMonkey: Scaling Test-Time Sampling and Verification for Vision-Language-Action Models | 2025-06-21 | 2025-07-07 | |
| 52 | 2506.17417v1 | Aha Moment Revisited: Are VLMs Truly Capable of Self Verification in Inference-time Scaling? | 2025-06-20 | 2025-06-20 | |
| 53 | 2506.16652v1 | CodeDiffuser: Attention-Enhanced Diffusion Policy via VLM-Generated Code for Instruction Ambiguity | 2025-06-19 | 2025-06-19 | |
| 54 | 2506.15871v1 | Visual symbolic mechanisms: Emergent symbol processing in vision language models | 2025-06-18 | 2025-06-18 | |
| 55 | 2506.14907v1 | PeRL: Permutation-Enhanced Reinforcement Learning for Interleaved Vision-Language Reasoning | 2025-06-17 | 2025-06-17 | |
| 56 | 2506.14035v1 | SimpleDoc: Multi-Modal Document Understanding with Dual-Cue Page Retrieval and Iterative Refinement | 2025-06-16 | 2025-06-16 | |
| 57 | 2506.12849v1 | CAPO: Reinforcing Consistent Reasoning in Medical Decision-Making | 2025-06-15 | 2025-06-15 | |
| 58 | 2506.12409v1 | Branch, or Layer? Zeroth-Order Optimization for Continual Learning of Vision-Language Models | 2025-06-14 | 2025-06-14 | |
| 59 | 2506.11976v2 | How Visual Representations Map to Language Feature Space in Multimodal LLMs | 2025-06-13 | 2025-06-22 | |
| 60 | 2506.11234v1 | Poutine: Vision-Language-Trajectory Pre-Training and Reinforcement Learning Post-Training Enable Robust End-to-End Autonomous Driving | 2025-06-12 | 2025-06-12 | |
| 61 | 2506.10202v1 | Q2E: Query-to-Event Decomposition for Zero-Shot Multilingual Text-to-Video Retrieval | 2025-06-11 | 2025-06-11 | |
| 62 | 2506.17267v1 | CF-VLM: CounterFactual Vision-Language Fine-tuning | 2025-06-10 | 2025-06-10 | |
| 63 | 2506.08227v1 | A Good CREPE needs more than just Sugar: Investigating Biases in Compositional Vision-Language Benchmarks | 2025-06-09 | 2025-06-09 | |
| 64 | 2506.07214v1 | Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | 2025-06-08 | 2025-06-08 | |
| 65 | 2506.06884v1 | FREE: Fast and Robust Vision Language Models with Early Exits | 2025-06-07 | 2025-06-07 | |
| 66 | 2506.06506v1 | Biases Propagate in Encoder-based Vision-Language Models: A Systematic Analysis From Intrinsic Measures to Zero-shot Retrieval Outcomes | 2025-06-06 | 2025-06-06 | |
| 67 | 2506.05523v1 | MORSE-500: A Programmatically Controllable Video Benchmark to Stress-Test Multimodal Reasoning | 2025-06-05 | 2025-06-05 | |
| 68 | 2506.04308v1 | RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics | 2025-06-04 | 2025-06-04 | |
| 69 | 2506.03371v1 | Toward Reliable VLM: A Fine-Grained Benchmark and Framework for Exposure, Bias, and Inference in Korean Street Views | 2025-06-03 | 2025-06-03 | |
| 70 | 2506.01955v1 | Dual-Process Image Generation | 2025-06-02 | 2025-06-02 | |
| 71 | 2506.01203v1 | Self-Supervised Multi-View Representation Learning using Vision-Language Model for 3D/4D Facial Expression Recognition | 2025-06-01 | 2025-06-01 | |
| 72 | 2506.00613v1 | Evaluating Robot Policies in a World Model | 2025-05-31 | 2025-05-31 | |
| 73 | 2506.00238v1 | ZeShot-VQA: Zero-Shot Visual Question Answering Framework with Answer Mapping for Natural Disaster Damage Assessment | 2025-05-30 | 2025-05-30 | |
| 74 | 2505.23977v1 | VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL | 2025-05-29 | 2025-05-29 | |
| 75 | 2505.22946v1 | NegVQA: Can Vision Language Models Understand Negation? | 2025-05-28 | 2025-05-28 | |
| 76 | 2505.21771v1 | MMTBENCH: A Unified Benchmark for Complex Multimodal Table Reasoning | 2025-05-27 | 2025-05-27 | |
| 77 | 2505.20444v1 | HoPE: Hybrid of Position Embedding for Length Generalization in Vision-Language Models | 2025-05-26 | 2025-05-26 | |
| 78 | 2505.19358v1 | RoofNet: A Global Multimodal Dataset for Roof Material Classification | 2025-05-25 | 2025-05-25 | |
| 79 | 2505.18792v1 | On the Dual-Use Dilemma in Physical Reasoning and Force | 2025-05-24 | 2025-05-24 | |
| 80 | 2505.20326v1 | Cultural Awareness in Vision-Language Models: A Cross-Country Exploration | 2025-05-23 | 2025-05-23 | |
| 81 | 2505.16854v2 | Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models | 2025-05-22 | 2025-05-23 | |
| 82 | 2505.15966v2 | Pixel Reasoner: Incentivizing Pixel-Space Reasoning with Curiosity-Driven Reinforcement Learning | 2025-05-21 | 2025-05-26 | |
| 83 | 2506.11031v2 | Task-aligned prompting improves zero-shot detection of AI-generated images by Vision-Language Models | 2025-05-20 | 2025-06-16 | |
| 84 | 2505.13777v1 | Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping | 2025-05-19 | 2025-05-19 | |
| 85 | 2505.12493v2 | UIShift: Enhancing VLM-based GUI Agents through Self-supervised Reinforcement Learning | 2025-05-18 | 2025-06-21 | |
| 86 | 2505.12045v1 | FIGhost: Fluorescent Ink-based Stealthy and Flexible Backdoor Attacks on Physical Traffic Sign Recognition | 2025-05-17 | 2025-05-17 | |
| 87 | 2505.11758v1 | Generalizable Vision-Language Few-Shot Adaptation with Predictive Prompts and Negative Learning | 2025-05-16 | 2025-05-16 | |
| 88 | 2505.10664v1 | CLIP Embeddings for AI-Generated Image Detection: A Few-Shot Study with Lightweight Classifier | 2025-05-15 | 2025-05-15 | |
| 89 | 2505.09498v1 | Flash-VL 2B: Optimizing Vision-Language Model Performance for Ultra-Low Latency and High Throughput | 2025-05-14 | 2025-05-14 | |
| 90 | 2505.08910v2 | Behind Maya: Building a Multilingual Vision Language Model | 2025-05-13 | 2025-05-15 | |
| 91 | 2505.08818v1 | Position: Restructuring of Categories and Implementation of Guidelines Essential for VLM Adoption in Healthcare | 2025-05-12 | 2025-05-12 | |
| 92 | 2505.07062v1 | Seed1.5-VL Technical Report | 2025-05-11 | 2025-05-11 | |
| 93 | 2505.08803v1 | Multi-modal Synthetic Data Training and Model Collapse: Insights from VLMs and Diffusion Models | 2025-05-10 | 2025-05-10 | |
| 94 | 2505.06413v1 | Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | 2025-05-09 | 2025-05-09 | |
| 95 | 2505.05635v1 | VR-RAG: Open-vocabulary Species Recognition with RAG-Assisted Large Multi-Modal Models | 2025-05-08 | 2025-05-08 | |
| 96 | 2505.04787v2 | Replay to Remember (R2R): An Efficient Uncertainty-driven Unsupervised Continual Learning Framework Using Generative Replay | 2025-05-07 | 2025-05-09 | |
| 97 | 2505.03703v1 | Fill the Gap: Quantifying and Reducing the Modality Gap in Image-Text Representation Learning | 2025-05-06 | 2025-05-06 | |
| 98 | 2505.02569v1 | HapticVLM: VLM-Driven Texture Recognition Aimed at Intelligent Haptic Interaction | 2025-05-05 | 2025-05-05 | |
| 99 | 2505.02056v1 | Handling Imbalanced Pseudolabels for Vision-Language Models with Concept Alignment and Confusion-Aware Calibrated Margin | 2025-05-04 | 2025-05-04 | |
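As a quick sanity check on rows like those above, the `paper_ID` field can be split into an arXiv identifier and version number, and papers that were revised after publication can be detected by comparing the two date columns. A minimal sketch, using a few rows copied from the preview (the helper names `split_arxiv_id` and `revised` are illustrative, not part of the dataset):

```python
import re
from datetime import date

# A few rows copied from the preview table above.
rows = [
    {"paper_ID": "2508.09071v1", "publication_date": "2025-08-12", "update_date": "2025-08-12"},
    {"paper_ID": "2508.01057v2", "publication_date": "2025-08-01", "update_date": "2025-08-12"},
    {"paper_ID": "2507.00209v3", "publication_date": "2025-06-30", "update_date": "2025-07-29"},
]

def split_arxiv_id(paper_id):
    """Split an identifier like '2508.01057v2' into ('2508.01057', 2)."""
    m = re.fullmatch(r"(\d{4}\.\d{4,5})v(\d+)", paper_id)
    if m is None:
        raise ValueError(f"unexpected paper_ID format: {paper_id}")
    return m.group(1), int(m.group(2))

def revised(row):
    """A paper was revised if its update date falls after its publication date."""
    return date.fromisoformat(row["update_date"]) > date.fromisoformat(row["publication_date"])

revised_ids = [split_arxiv_id(r["paper_ID"]) for r in rows if revised(r)]
print(revised_ids)  # only the v2/v3 rows, whose update_date is later
```

Since every `paper_ID` in the preview is exactly 12 characters, the regex above doubles as a format check for the `stringlengths 12 12` constraint in the column summary.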
Downloads last month: 7